Test Report: Docker_Linux_crio_arm64 22343

72a35eba785b899784aeadb9114946ce54d68eef:2025-12-27:43008

Failed tests (31/332)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.34
35 TestAddons/parallel/Registry 15.92
36 TestAddons/parallel/RegistryCreds 0.52
37 TestAddons/parallel/Ingress 11.76
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 6.36
41 TestAddons/parallel/CSI 47.26
42 TestAddons/parallel/Headlamp 3.45
43 TestAddons/parallel/CloudSpanner 5.39
44 TestAddons/parallel/LocalPath 8.59
45 TestAddons/parallel/NvidiaDevicePlugin 6.26
46 TestAddons/parallel/Yakd 6.28
52 TestForceSystemdFlag 506.33
53 TestForceSystemdEnv 508.39
177 TestMultiControlPlane/serial/RestartCluster 487.37
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.2
179 TestMultiControlPlane/serial/AddSecondaryNode 2.11
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.24
191 TestJSONOutput/pause/Command 2.42
197 TestJSONOutput/unpause/Command 1.64
261 TestPause/serial/Pause 9.07
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.44
307 TestStartStop/group/old-k8s-version/serial/Pause 6.72
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.4
318 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.8
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.7
329 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.63
336 TestStartStop/group/embed-certs/serial/Pause 8.08
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.35
344 TestStartStop/group/no-preload/serial/Pause 7.37
353 TestStartStop/group/newest-cni/serial/Pause 8.03
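Note: the TestAddons failures detailed below all share one signature. Each test's deferred cleanup runs "out/minikube-linux-arm64 -p addons-716851 addons disable ..." and that command first checks whether the cluster is paused; the check shells out to "sudo runc list -f json" on the node, which fails with "open /run/runc: no such file or directory" on this CRI-O node, and minikube surfaces it as MK_ADDON_DISABLE_PAUSED (exit status 11). A minimal sketch for re-running that check by hand, assuming the addons-716851 profile from this run is still up (profile name, binary path, and flags are copied from the logs below):

    # list kube-system containers the same way the disable path does
    out/minikube-linux-arm64 -p addons-716851 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # the paused-state check that fails: the runc state dir /run/runc is absent on this node
    out/minikube-linux-arm64 -p addons-716851 ssh -- sudo runc list -f json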

TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable volcano --alsologtostderr -v=1: exit status 11 (344.368307ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1227 09:32:14.539788  306535 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:32:14.540567  306535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:14.540581  306535 out.go:374] Setting ErrFile to fd 2...
	I1227 09:32:14.540587  306535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:14.540879  306535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:32:14.541177  306535 mustload.go:66] Loading cluster: addons-716851
	I1227 09:32:14.541572  306535 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:14.541591  306535 addons.go:622] checking whether the cluster is paused
	I1227 09:32:14.541701  306535 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:14.541715  306535 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:32:14.542293  306535 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:32:14.583452  306535 ssh_runner.go:195] Run: systemctl --version
	I1227 09:32:14.583523  306535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:32:14.604996  306535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:32:14.710567  306535 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:32:14.710651  306535 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:32:14.740561  306535 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:32:14.740584  306535 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:32:14.740589  306535 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:32:14.740593  306535 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:32:14.740597  306535 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:32:14.740601  306535 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:32:14.740604  306535 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:32:14.740607  306535 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:32:14.740610  306535 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:32:14.740617  306535 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:32:14.740620  306535 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:32:14.740628  306535 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:32:14.740632  306535 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:32:14.740635  306535 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:32:14.740638  306535 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:32:14.740644  306535 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:32:14.740647  306535 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:32:14.740652  306535 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:32:14.740655  306535 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:32:14.740658  306535 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:32:14.740663  306535 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:32:14.740666  306535 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:32:14.740668  306535 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:32:14.740672  306535 cri.go:96] found id: ""
	I1227 09:32:14.740723  306535 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:32:14.756293  306535 out.go:203] 
	W1227 09:32:14.759202  306535 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:32:14.759235  306535 out.go:285] * 
	* 
	W1227 09:32:14.804556  306535 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:32:14.807715  306535 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.34s)

TestAddons/parallel/Registry (15.92s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 6.753052ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-sft95" [a8eceb49-0be8-43ec-acc2-d7ba8499ec65] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00505102s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-dqhx6" [8db0accc-ca73-43cb-8d99-605b48ac65d1] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003402898s
addons_test.go:394: (dbg) Run:  kubectl --context addons-716851 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-716851 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-716851 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.378631534s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 ip
2025/12/27 09:32:40 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable registry --alsologtostderr -v=1: exit status 11 (272.641691ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1227 09:32:40.805531  307081 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:32:40.806210  307081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:40.806225  307081 out.go:374] Setting ErrFile to fd 2...
	I1227 09:32:40.806233  307081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:40.806597  307081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:32:40.806925  307081 mustload.go:66] Loading cluster: addons-716851
	I1227 09:32:40.807342  307081 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:40.807364  307081 addons.go:622] checking whether the cluster is paused
	I1227 09:32:40.807544  307081 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:40.807566  307081 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:32:40.808215  307081 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:32:40.829000  307081 ssh_runner.go:195] Run: systemctl --version
	I1227 09:32:40.829885  307081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:32:40.848368  307081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:32:40.951465  307081 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:32:40.951551  307081 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:32:40.988181  307081 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:32:40.988207  307081 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:32:40.988213  307081 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:32:40.988218  307081 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:32:40.988221  307081 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:32:40.988225  307081 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:32:40.988228  307081 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:32:40.988231  307081 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:32:40.988234  307081 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:32:40.988251  307081 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:32:40.988257  307081 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:32:40.988261  307081 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:32:40.988264  307081 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:32:40.988267  307081 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:32:40.988270  307081 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:32:40.988276  307081 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:32:40.988279  307081 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:32:40.988283  307081 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:32:40.988286  307081 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:32:40.988289  307081 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:32:40.988294  307081 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:32:40.988298  307081 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:32:40.988306  307081 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:32:40.988310  307081 cri.go:96] found id: ""
	I1227 09:32:40.988363  307081 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:32:41.007886  307081 out.go:203] 
	W1227 09:32:41.011050  307081 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:32:41.011091  307081 out.go:285] * 
	* 
	W1227 09:32:41.013235  307081 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:32:41.016184  307081 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.92s)
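Note: the registry checks themselves passed (the in-cluster "wget --spider" against registry.kube-system.svc.cluster.local and the GET against the node IP on port 5000); only the trailing addon-disable cleanup hit the runc/paused-state failure described above. A rough sketch of the same reachability checks, assuming the profile is still up; the curl form of the host-side GET is illustrative, not what the test binary itself runs:

    kubectl --context addons-716851 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sI "http://$(out/minikube-linux-arm64 -p addons-716851 ip):5000"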

TestAddons/parallel/RegistryCreds (0.52s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.061277ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-716851
addons_test.go:334: (dbg) Run:  kubectl --context addons-716851 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (290.016534ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1227 09:33:14.418853  308905 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:33:14.419632  308905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:14.419646  308905 out.go:374] Setting ErrFile to fd 2...
	I1227 09:33:14.419652  308905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:14.420909  308905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:33:14.421266  308905 mustload.go:66] Loading cluster: addons-716851
	I1227 09:33:14.421667  308905 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:14.421692  308905 addons.go:622] checking whether the cluster is paused
	I1227 09:33:14.421813  308905 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:14.421828  308905 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:33:14.422322  308905 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:33:14.440875  308905 ssh_runner.go:195] Run: systemctl --version
	I1227 09:33:14.440941  308905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:33:14.457943  308905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:33:14.555426  308905 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:33:14.555568  308905 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:33:14.597174  308905 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:33:14.597249  308905 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:33:14.597269  308905 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:33:14.597283  308905 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:33:14.597288  308905 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:33:14.597293  308905 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:33:14.597296  308905 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:33:14.597300  308905 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:33:14.597303  308905 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:33:14.597325  308905 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:33:14.597335  308905 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:33:14.597350  308905 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:33:14.597354  308905 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:33:14.597357  308905 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:33:14.597360  308905 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:33:14.597370  308905 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:33:14.597373  308905 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:33:14.597377  308905 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:33:14.597380  308905 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:33:14.597382  308905 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:33:14.597387  308905 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:33:14.597401  308905 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:33:14.597408  308905 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:33:14.597411  308905 cri.go:96] found id: ""
	I1227 09:33:14.597461  308905 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:33:14.612939  308905 out.go:203] 
	W1227 09:33:14.616024  308905 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:33:14.616052  308905 out.go:285] * 
	* 
	W1227 09:33:14.618097  308905 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:33:14.621117  308905 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)
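Note: the registry-creds configure step and the secret check completed without error in the log; the failure is again the disable cleanup. To re-inspect what the addon configured, assuming the profile is still up (the name filter in the second line is illustrative):

    kubectl --context addons-716851 -n kube-system get secret -o yaml
    kubectl --context addons-716851 -n kube-system get secrets | grep -i creds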

TestAddons/parallel/Ingress (11.76s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-716851 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-716851 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-716851 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [c8ef6c8b-8a77-417a-86a8-c0b8d36e69a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [c8ef6c8b-8a77-417a-86a8-c0b8d36e69a4] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003883002s
I1227 09:33:12.135718  299811 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-716851 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (318.143367ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1227 09:33:13.598418  308780 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:33:13.599177  308780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:13.599206  308780 out.go:374] Setting ErrFile to fd 2...
	I1227 09:33:13.599227  308780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:13.599534  308780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:33:13.599847  308780 mustload.go:66] Loading cluster: addons-716851
	I1227 09:33:13.600287  308780 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:13.600327  308780 addons.go:622] checking whether the cluster is paused
	I1227 09:33:13.600459  308780 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:13.600493  308780 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:33:13.601058  308780 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:33:13.634090  308780 ssh_runner.go:195] Run: systemctl --version
	I1227 09:33:13.634151  308780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:33:13.663047  308780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:33:13.772961  308780 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:33:13.773057  308780 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:33:13.817482  308780 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:33:13.817502  308780 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:33:13.817508  308780 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:33:13.817512  308780 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:33:13.817515  308780 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:33:13.817521  308780 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:33:13.817524  308780 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:33:13.817527  308780 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:33:13.817535  308780 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:33:13.817541  308780 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:33:13.817544  308780 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:33:13.817547  308780 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:33:13.817550  308780 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:33:13.817553  308780 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:33:13.817556  308780 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:33:13.817561  308780 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:33:13.817564  308780 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:33:13.817567  308780 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:33:13.817570  308780 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:33:13.817573  308780 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:33:13.817577  308780 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:33:13.817580  308780 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:33:13.817583  308780 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:33:13.817586  308780 cri.go:96] found id: ""
	I1227 09:33:13.817636  308780 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:33:13.837433  308780 out.go:203] 
	W1227 09:33:13.840498  308780 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:33:13.840570  308780 out.go:285] * 
	* 
	W1227 09:33:13.842628  308780 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:33:13.845646  308780 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable ingress --alsologtostderr -v=1: exit status 11 (256.878507ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1227 09:33:13.910211  308844 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:33:13.911095  308844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:13.911117  308844 out.go:374] Setting ErrFile to fd 2...
	I1227 09:33:13.911124  308844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:13.911470  308844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:33:13.911857  308844 mustload.go:66] Loading cluster: addons-716851
	I1227 09:33:13.912379  308844 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:13.912404  308844 addons.go:622] checking whether the cluster is paused
	I1227 09:33:13.912576  308844 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:13.912595  308844 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:33:13.913594  308844 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:33:13.931189  308844 ssh_runner.go:195] Run: systemctl --version
	I1227 09:33:13.931267  308844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:33:13.950691  308844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:33:14.051270  308844 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:33:14.051359  308844 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:33:14.080386  308844 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:33:14.080409  308844 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:33:14.080415  308844 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:33:14.080419  308844 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:33:14.080423  308844 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:33:14.080426  308844 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:33:14.080430  308844 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:33:14.080433  308844 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:33:14.080436  308844 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:33:14.080446  308844 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:33:14.080450  308844 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:33:14.080454  308844 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:33:14.080457  308844 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:33:14.080461  308844 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:33:14.080465  308844 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:33:14.080470  308844 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:33:14.080473  308844 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:33:14.080477  308844 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:33:14.080480  308844 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:33:14.080482  308844 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:33:14.080487  308844 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:33:14.080489  308844 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:33:14.080492  308844 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:33:14.080495  308844 cri.go:96] found id: ""
	I1227 09:33:14.080547  308844 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:33:14.095630  308844 out.go:203] 
	W1227 09:33:14.098591  308844 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:33:14.098614  308844 out.go:285] * 
	* 
	W1227 09:33:14.100624  308844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:33:14.103472  308844 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (11.76s)
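Note: the ingress checks themselves completed (the nginx pod became Ready, the in-VM curl with the Host header ran, and ingress-dns resolved hello-john.test against the node IP); both disable calls then failed with the same runc error. A sketch of the two manual checks, assuming the profile and the test resources are still present:

    out/minikube-linux-arm64 -p addons-716851 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-716851 ip)"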

TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-zv99m" [7b498d15-e73b-466f-8990-5a80fd75691a] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003929308s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (294.87724ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1227 09:33:02.102446  308124 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:33:02.103314  308124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:02.103352  308124 out.go:374] Setting ErrFile to fd 2...
	I1227 09:33:02.103375  308124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:02.103731  308124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:33:02.104094  308124 mustload.go:66] Loading cluster: addons-716851
	I1227 09:33:02.104511  308124 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:02.104603  308124 addons.go:622] checking whether the cluster is paused
	I1227 09:33:02.104749  308124 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:02.104777  308124 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:33:02.105337  308124 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:33:02.142513  308124 ssh_runner.go:195] Run: systemctl --version
	I1227 09:33:02.142589  308124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:33:02.168248  308124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:33:02.274935  308124 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:33:02.275058  308124 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:33:02.314251  308124 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:33:02.314284  308124 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:33:02.314289  308124 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:33:02.314292  308124 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:33:02.314295  308124 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:33:02.314324  308124 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:33:02.314334  308124 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:33:02.314338  308124 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:33:02.314341  308124 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:33:02.314370  308124 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:33:02.314381  308124 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:33:02.314385  308124 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:33:02.314388  308124 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:33:02.314391  308124 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:33:02.314405  308124 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:33:02.314417  308124 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:33:02.314421  308124 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:33:02.314439  308124 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:33:02.314444  308124 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:33:02.314452  308124 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:33:02.314458  308124 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:33:02.314462  308124 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:33:02.314468  308124 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:33:02.314472  308124 cri.go:96] found id: ""
	I1227 09:33:02.314538  308124 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:33:02.335105  308124 out.go:203] 
	W1227 09:33:02.338200  308124 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:33:02.338260  308124 out.go:285] * 
	* 
	W1227 09:33:02.340328  308124 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:33:02.343352  308124 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.30s)
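Note: every addons enable/disable failure in this report shares the same signature. The paused-state check first lists the kube-system containers via crictl (the "found id:" lines above), then shells out to `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory", so minikube aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED) before it ever touches the addon. A minimal manual diagnostic against the same profile, as a sketch (that the runtime state simply lives under a different root is an assumption, not something this run confirms):

	# replay the part of the check that works: crictl can see the kube-system containers
	out/minikube-linux-arm64 -p addons-716851 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the runc state directory the check expects is missing, which is the direct cause of the exit status 1
	out/minikube-linux-arm64 -p addons-716851 ssh -- sudo ls /run/runc
	# runc accepts an explicit state root; trying the directory the runtime actually uses (unknown here) would
	# distinguish "wrong root" from "no runc-managed state at all"
	out/minikube-linux-arm64 -p addons-716851 ssh -- sudo runc --root <actual-state-dir> list -f json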

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 10.729588ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-wmzpf" [5d4fac15-b100-4825-93f5-1f7955b9f601] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004988047s
addons_test.go:465: (dbg) Run:  kubectl --context addons-716851 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (256.262648ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:32:55.840730  308015 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:32:55.841563  308015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:55.841577  308015 out.go:374] Setting ErrFile to fd 2...
	I1227 09:32:55.841583  308015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:55.841869  308015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:32:55.842166  308015 mustload.go:66] Loading cluster: addons-716851
	I1227 09:32:55.842577  308015 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:55.842601  308015 addons.go:622] checking whether the cluster is paused
	I1227 09:32:55.842716  308015 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:55.842732  308015 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:32:55.843275  308015 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:32:55.865622  308015 ssh_runner.go:195] Run: systemctl --version
	I1227 09:32:55.865758  308015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:32:55.883737  308015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:32:55.982925  308015 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:32:55.983023  308015 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:32:56.021118  308015 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:32:56.021150  308015 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:32:56.021156  308015 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:32:56.021160  308015 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:32:56.021163  308015 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:32:56.021167  308015 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:32:56.021171  308015 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:32:56.021174  308015 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:32:56.021178  308015 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:32:56.021184  308015 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:32:56.021197  308015 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:32:56.021204  308015 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:32:56.021207  308015 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:32:56.021211  308015 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:32:56.021222  308015 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:32:56.021232  308015 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:32:56.021236  308015 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:32:56.021239  308015 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:32:56.021242  308015 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:32:56.021246  308015 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:32:56.021256  308015 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:32:56.021260  308015 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:32:56.021263  308015 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:32:56.021266  308015 cri.go:96] found id: ""
	I1227 09:32:56.021333  308015 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:32:56.036831  308015 out.go:203] 
	W1227 09:32:56.039742  308015 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:32:56.039774  308015 out.go:285] * 
	* 
	W1227 09:32:56.041905  308015 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:32:56.045021  308015 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.36s)

                                                
                                    
x
+
TestAddons/parallel/CSI (47.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1227 09:32:46.416622  299811 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 09:32:46.421975  299811 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 09:32:46.422012  299811 kapi.go:107] duration metric: took 5.410624ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 5.422857ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-716851 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-716851 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [39d64ae2-1e5b-4098-abad-ce067cfd8b87] Pending
helpers_test.go:353: "task-pv-pod" [39d64ae2-1e5b-4098-abad-ce067cfd8b87] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [39d64ae2-1e5b-4098-abad-ce067cfd8b87] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004356942s
addons_test.go:574: (dbg) Run:  kubectl --context addons-716851 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-716851 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-716851 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-716851 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-716851 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-716851 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-716851 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [604d06a6-e4fe-4159-b464-b4bb5bb343c5] Pending
helpers_test.go:353: "task-pv-pod-restore" [604d06a6-e4fe-4159-b464-b4bb5bb343c5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [604d06a6-e4fe-4159-b464-b4bb5bb343c5] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003213826s
addons_test.go:616: (dbg) Run:  kubectl --context addons-716851 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-716851 delete pod task-pv-pod-restore: (1.20168275s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-716851 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-716851 delete volumesnapshot new-snapshot-demo
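For context, the command sequence above is the standard CSI snapshot/restore round trip: claim a volume (hpvc), run a pod against it, snapshot it (new-snapshot-demo), delete the original pod and claim, then create a new claim (hpvc-restore) whose dataSource is the snapshot and run a pod against that. A minimal sketch of the snapshot and restore objects involved; this is not the repository's testdata manifests, and the class names are assumed to be the csi-hostpath addon defaults:

	kubectl --context addons-716851 apply -f - <<'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed default class name
	  source:
	    persistentVolumeClaimName: hpvc                 # snapshot the existing claim
	---
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc                 # assumed default class name
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:                                       # restore: provision from the snapshot
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	EOF

The test creates these in separate steps with waits in between; the restored claim stays Pending until the snapshot reports readyToUse.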
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (259.265653ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:33:33.187393  309232 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:33:33.188283  309232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:33.188404  309232 out.go:374] Setting ErrFile to fd 2...
	I1227 09:33:33.188428  309232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:33.188890  309232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:33:33.189275  309232 mustload.go:66] Loading cluster: addons-716851
	I1227 09:33:33.189983  309232 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:33.190035  309232 addons.go:622] checking whether the cluster is paused
	I1227 09:33:33.190203  309232 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:33.190240  309232 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:33:33.191049  309232 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:33:33.210549  309232 ssh_runner.go:195] Run: systemctl --version
	I1227 09:33:33.210609  309232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:33:33.230556  309232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:33:33.334835  309232 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:33:33.334948  309232 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:33:33.370035  309232 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:33:33.370111  309232 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:33:33.370125  309232 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:33:33.370130  309232 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:33:33.370134  309232 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:33:33.370138  309232 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:33:33.370141  309232 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:33:33.370144  309232 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:33:33.370147  309232 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:33:33.370168  309232 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:33:33.370190  309232 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:33:33.370208  309232 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:33:33.370227  309232 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:33:33.370231  309232 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:33:33.370234  309232 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:33:33.370248  309232 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:33:33.370252  309232 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:33:33.370257  309232 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:33:33.370260  309232 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:33:33.370263  309232 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:33:33.370268  309232 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:33:33.370285  309232 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:33:33.370294  309232 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:33:33.370297  309232 cri.go:96] found id: ""
	I1227 09:33:33.370349  309232 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:33:33.385456  309232 out.go:203] 
	W1227 09:33:33.388500  309232 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:33:33.388539  309232 out.go:285] * 
	* 
	W1227 09:33:33.390595  309232 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:33:33.393645  309232 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (270.452613ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:33:33.454997  309276 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:33:33.455825  309276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:33.455865  309276 out.go:374] Setting ErrFile to fd 2...
	I1227 09:33:33.455886  309276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:33.456210  309276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:33:33.456548  309276 mustload.go:66] Loading cluster: addons-716851
	I1227 09:33:33.457004  309276 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:33.457054  309276 addons.go:622] checking whether the cluster is paused
	I1227 09:33:33.457187  309276 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:33:33.457225  309276 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:33:33.457764  309276 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:33:33.475607  309276 ssh_runner.go:195] Run: systemctl --version
	I1227 09:33:33.475663  309276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:33:33.496165  309276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:33:33.602477  309276 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:33:33.602561  309276 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:33:33.637927  309276 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:33:33.637950  309276 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:33:33.637956  309276 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:33:33.637960  309276 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:33:33.637963  309276 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:33:33.637967  309276 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:33:33.637970  309276 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:33:33.637974  309276 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:33:33.637977  309276 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:33:33.637990  309276 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:33:33.637999  309276 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:33:33.638002  309276 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:33:33.638005  309276 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:33:33.638008  309276 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:33:33.638011  309276 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:33:33.638017  309276 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:33:33.638021  309276 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:33:33.638025  309276 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:33:33.638028  309276 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:33:33.638031  309276 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:33:33.638036  309276 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:33:33.638043  309276 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:33:33.638046  309276 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:33:33.638053  309276 cri.go:96] found id: ""
	I1227 09:33:33.638109  309276 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:33:33.658236  309276 out.go:203] 
	W1227 09:33:33.661050  309276 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:33:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:33:33.661113  309276 out.go:285] * 
	* 
	W1227 09:33:33.663123  309276 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:33:33.665984  309276 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (47.26s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.45s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-716851 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-716851 --alsologtostderr -v=1: exit status 11 (396.889999ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:32:46.327553  307393 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:32:46.330009  307393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:46.330031  307393 out.go:374] Setting ErrFile to fd 2...
	I1227 09:32:46.330038  307393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:46.330344  307393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:32:46.330686  307393 mustload.go:66] Loading cluster: addons-716851
	I1227 09:32:46.331071  307393 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:46.331089  307393 addons.go:622] checking whether the cluster is paused
	I1227 09:32:46.331212  307393 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:46.331224  307393 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:32:46.331767  307393 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:32:46.374417  307393 ssh_runner.go:195] Run: systemctl --version
	I1227 09:32:46.374480  307393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:32:46.404179  307393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:32:46.533673  307393 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:32:46.533772  307393 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:32:46.595125  307393 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:32:46.595150  307393 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:32:46.595155  307393 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:32:46.595160  307393 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:32:46.595163  307393 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:32:46.595167  307393 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:32:46.595170  307393 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:32:46.595184  307393 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:32:46.595188  307393 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:32:46.595202  307393 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:32:46.595206  307393 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:32:46.595209  307393 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:32:46.595212  307393 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:32:46.595215  307393 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:32:46.595218  307393 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:32:46.595223  307393 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:32:46.595227  307393 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:32:46.595230  307393 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:32:46.595233  307393 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:32:46.595236  307393 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:32:46.595243  307393 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:32:46.595254  307393 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:32:46.595257  307393 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:32:46.595259  307393 cri.go:96] found id: ""
	I1227 09:32:46.595314  307393 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:32:46.616394  307393 out.go:203] 
	W1227 09:32:46.619733  307393 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:32:46.619769  307393 out.go:285] * 
	* 
	W1227 09:32:46.621760  307393 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:32:46.624928  307393 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-716851 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-716851
helpers_test.go:244: (dbg) docker inspect addons-716851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd50287978e2f660a8499c3f3df283d7c72a30ddc502b6f90f9d306958042807",
	        "Created": "2025-12-27T09:30:16.183910199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:30:16.254617597Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/dd50287978e2f660a8499c3f3df283d7c72a30ddc502b6f90f9d306958042807/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd50287978e2f660a8499c3f3df283d7c72a30ddc502b6f90f9d306958042807/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd50287978e2f660a8499c3f3df283d7c72a30ddc502b6f90f9d306958042807/hosts",
	        "LogPath": "/var/lib/docker/containers/dd50287978e2f660a8499c3f3df283d7c72a30ddc502b6f90f9d306958042807/dd50287978e2f660a8499c3f3df283d7c72a30ddc502b6f90f9d306958042807-json.log",
	        "Name": "/addons-716851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-716851:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-716851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd50287978e2f660a8499c3f3df283d7c72a30ddc502b6f90f9d306958042807",
	                "LowerDir": "/var/lib/docker/overlay2/d7b7ac3768acdbd0dc3b3b2d23f838dcdacd44f7402e854801d79944edd1a287-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7b7ac3768acdbd0dc3b3b2d23f838dcdacd44f7402e854801d79944edd1a287/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7b7ac3768acdbd0dc3b3b2d23f838dcdacd44f7402e854801d79944edd1a287/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7b7ac3768acdbd0dc3b3b2d23f838dcdacd44f7402e854801d79944edd1a287/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-716851",
	                "Source": "/var/lib/docker/volumes/addons-716851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-716851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-716851",
	                "name.minikube.sigs.k8s.io": "addons-716851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8559a607ffd5239a2ee734f6f555ab10eb0070da5d000d03596b29a14cde736e",
	            "SandboxKey": "/var/run/docker/netns/8559a607ffd5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-716851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:a9:b8:2f:52:6e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95a320f8716e5c3c981cb30da05c00f681ae9e6e607b4430fa5985519b8e9669",
	                    "EndpointID": "9696ed6d2c73d67ee4a240a3505e74048a0708ce03c64d5bdae714dd63695843",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-716851",
	                        "dd50287978e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
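For reference, HostConfig.PortBindings above only requests localhost publishing with an ephemeral host port (empty "HostPort"), while NetworkSettings.Ports shows what Docker actually assigned, e.g. SSH on 127.0.0.1:33138, the port the ssh clients in the stderr traces connect to. A quick way to read the same mapping by hand, as a sketch:

	# ask Docker for the published mapping of the container's SSH port
	docker port addons-716851 22/tcp
	# same value via the inspect template minikube itself uses in the traces above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-716851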
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-716851 -n addons-716851
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-716851 logs -n 25: (1.521909984s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-259204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-259204 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ delete │ -p download-only-259204 │ download-only-259204 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ start │ -o=json --download-only -p download-only-787419 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-787419 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ delete │ -p download-only-787419 │ download-only-787419 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ delete │ -p download-only-259204 │ download-only-259204 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ delete │ -p download-only-787419 │ download-only-787419 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ start │ --download-only -p download-docker-725790 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-725790 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ │
	│ delete │ -p download-docker-725790 │ download-docker-725790 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ start │ --download-only -p binary-mirror-963946 --alsologtostderr --binary-mirror http://127.0.0.1:44187 --driver=docker  --container-runtime=crio │ binary-mirror-963946 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ │
	│ delete │ -p binary-mirror-963946 │ binary-mirror-963946 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ addons │ disable dashboard -p addons-716851 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ │
	│ addons │ enable dashboard -p addons-716851 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ │
	│ start │ -p addons-716851 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:32 UTC │
	│ addons │ addons-716851 addons disable volcano --alsologtostderr -v=1 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ │
	│ addons │ addons-716851 addons disable gcp-auth --alsologtostderr -v=1 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ │
	│ addons │ addons-716851 addons disable yakd --alsologtostderr -v=1 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ │
	│ addons │ addons-716851 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ │
	│ ip │ addons-716851 ip │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:32 UTC │
	│ addons │ addons-716851 addons disable registry --alsologtostderr -v=1 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ │
	│ ssh │ addons-716851 ssh cat /opt/local-path-provisioner/pvc-996fb562-ecfc-48a4-90f6-7b63693bb059_default_test-pvc/file1 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ 27 Dec 25 09:32 UTC │
	│ addons │ addons-716851 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ │
	│ addons │ addons-716851 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ │
	│ addons │ enable headlamp -p addons-716851 --alsologtostderr -v=1 │ addons-716851 │ jenkins │ v1.37.0 │ 27 Dec 25 09:32 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:29:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:29:49.972451  300568 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:29:49.972681  300568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:49.972709  300568 out.go:374] Setting ErrFile to fd 2...
	I1227 09:29:49.972730  300568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:49.973012  300568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:29:49.973564  300568 out.go:368] Setting JSON to false
	I1227 09:29:49.974416  300568 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4343,"bootTime":1766823447,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:29:49.974515  300568 start.go:143] virtualization:  
	I1227 09:29:49.978013  300568 out.go:179] * [addons-716851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:29:49.980977  300568 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:29:49.981050  300568 notify.go:221] Checking for updates...
	I1227 09:29:49.986997  300568 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:29:49.989967  300568 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:29:49.992751  300568 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:29:49.995613  300568 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:29:49.998582  300568 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:29:50.001815  300568 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:29:50.041760  300568 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:29:50.041894  300568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:29:50.103930  300568 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-27 09:29:50.094187578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:29:50.104057  300568 docker.go:319] overlay module found
	I1227 09:29:50.107221  300568 out.go:179] * Using the docker driver based on user configuration
	I1227 09:29:50.110110  300568 start.go:309] selected driver: docker
	I1227 09:29:50.110127  300568 start.go:928] validating driver "docker" against <nil>
	I1227 09:29:50.110143  300568 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:29:50.110907  300568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:29:50.178361  300568 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-27 09:29:50.169284038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:29:50.178513  300568 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:29:50.178738  300568 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:29:50.181648  300568 out.go:179] * Using Docker driver with root privileges
	I1227 09:29:50.184410  300568 cni.go:84] Creating CNI manager for ""
	I1227 09:29:50.184494  300568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:29:50.184511  300568 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:29:50.184593  300568 start.go:353] cluster config:
	{Name:addons-716851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-716851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:29:50.189660  300568 out.go:179] * Starting "addons-716851" primary control-plane node in "addons-716851" cluster
	I1227 09:29:50.192352  300568 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:29:50.195139  300568 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:29:50.198020  300568 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:29:50.198067  300568 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:29:50.198095  300568 cache.go:65] Caching tarball of preloaded images
	I1227 09:29:50.198103  300568 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:29:50.198192  300568 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:29:50.198203  300568 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:29:50.198577  300568 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/config.json ...
	I1227 09:29:50.198598  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/config.json: {Name:mk3b780a46e2decb996f754ca1640e47cc3a9e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:29:50.214030  300568 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:29:50.214174  300568 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 09:29:50.214199  300568 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory, skipping pull
	I1227 09:29:50.214204  300568 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in cache, skipping pull
	I1227 09:29:50.214216  300568 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a as a tarball
	I1227 09:29:50.214223  300568 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from local cache
	I1227 09:30:09.011527  300568 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a from cached tarball
	I1227 09:30:09.011574  300568 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:30:09.011629  300568 start.go:360] acquireMachinesLock for addons-716851: {Name:mk5b7d8972d212840a4bdd1d59c59f54d663d712 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:30:09.011780  300568 start.go:364] duration metric: took 125.776µs to acquireMachinesLock for "addons-716851"
	I1227 09:30:09.011810  300568 start.go:93] Provisioning new machine with config: &{Name:addons-716851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-716851 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:30:09.011896  300568 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:30:09.015450  300568 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1227 09:30:09.015747  300568 start.go:159] libmachine.API.Create for "addons-716851" (driver="docker")
	I1227 09:30:09.015788  300568 client.go:173] LocalClient.Create starting
	I1227 09:30:09.015908  300568 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 09:30:09.314510  300568 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 09:30:09.830411  300568 cli_runner.go:164] Run: docker network inspect addons-716851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:30:09.847096  300568 cli_runner.go:211] docker network inspect addons-716851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:30:09.847203  300568 network_create.go:284] running [docker network inspect addons-716851] to gather additional debugging logs...
	I1227 09:30:09.847274  300568 cli_runner.go:164] Run: docker network inspect addons-716851
	W1227 09:30:09.863604  300568 cli_runner.go:211] docker network inspect addons-716851 returned with exit code 1
	I1227 09:30:09.863640  300568 network_create.go:287] error running [docker network inspect addons-716851]: docker network inspect addons-716851: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-716851 not found
	I1227 09:30:09.863653  300568 network_create.go:289] output of [docker network inspect addons-716851]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-716851 not found
	
	** /stderr **
	I1227 09:30:09.863753  300568 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:30:09.880733  300568 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bc2300}
	I1227 09:30:09.880776  300568 network_create.go:124] attempt to create docker network addons-716851 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1227 09:30:09.880841  300568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-716851 addons-716851
	I1227 09:30:09.935490  300568 network_create.go:108] docker network addons-716851 192.168.49.0/24 created
	I1227 09:30:09.935528  300568 kic.go:121] calculated static IP "192.168.49.2" for the "addons-716851" container
	I1227 09:30:09.935605  300568 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:30:09.950821  300568 cli_runner.go:164] Run: docker volume create addons-716851 --label name.minikube.sigs.k8s.io=addons-716851 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:30:09.968485  300568 oci.go:103] Successfully created a docker volume addons-716851
	I1227 09:30:09.968578  300568 cli_runner.go:164] Run: docker run --rm --name addons-716851-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-716851 --entrypoint /usr/bin/test -v addons-716851:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:30:12.238872  300568 cli_runner.go:217] Completed: docker run --rm --name addons-716851-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-716851 --entrypoint /usr/bin/test -v addons-716851:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (2.270245521s)
	I1227 09:30:12.238914  300568 oci.go:107] Successfully prepared a docker volume addons-716851
	I1227 09:30:12.238964  300568 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:30:12.238981  300568 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:30:12.239048  300568 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-716851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:30:16.102919  300568 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-716851:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.863829966s)
	I1227 09:30:16.102956  300568 kic.go:203] duration metric: took 3.863971889s to extract preloaded images to volume ...
	W1227 09:30:16.103102  300568 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:30:16.103221  300568 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:30:16.167336  300568 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-716851 --name addons-716851 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-716851 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-716851 --network addons-716851 --ip 192.168.49.2 --volume addons-716851:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:30:16.477735  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Running}}
	I1227 09:30:16.498363  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:16.521806  300568 cli_runner.go:164] Run: docker exec addons-716851 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:30:16.571824  300568 oci.go:144] the created container "addons-716851" has a running status.
	I1227 09:30:16.571867  300568 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa...
	I1227 09:30:17.008238  300568 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:30:17.041938  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:17.064451  300568 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:30:17.064477  300568 kic_runner.go:114] Args: [docker exec --privileged addons-716851 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:30:17.119766  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:17.136841  300568 machine.go:94] provisionDockerMachine start ...
	I1227 09:30:17.136959  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:17.153269  300568 main.go:144] libmachine: Using SSH client type: native
	I1227 09:30:17.153691  300568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1227 09:30:17.153707  300568 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:30:17.154394  300568 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:30:20.295705  300568 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-716851
	
	I1227 09:30:20.295731  300568 ubuntu.go:182] provisioning hostname "addons-716851"
	I1227 09:30:20.295797  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:20.313195  300568 main.go:144] libmachine: Using SSH client type: native
	I1227 09:30:20.313504  300568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1227 09:30:20.313520  300568 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-716851 && echo "addons-716851" | sudo tee /etc/hostname
	I1227 09:30:20.464796  300568 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-716851
	
	I1227 09:30:20.464874  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:20.481924  300568 main.go:144] libmachine: Using SSH client type: native
	I1227 09:30:20.482235  300568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1227 09:30:20.482261  300568 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-716851' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-716851/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-716851' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:30:20.620585  300568 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:30:20.620654  300568 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:30:20.620696  300568 ubuntu.go:190] setting up certificates
	I1227 09:30:20.620734  300568 provision.go:84] configureAuth start
	I1227 09:30:20.620817  300568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-716851
	I1227 09:30:20.637128  300568 provision.go:143] copyHostCerts
	I1227 09:30:20.637224  300568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:30:20.637346  300568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:30:20.637406  300568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:30:20.637462  300568 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.addons-716851 san=[127.0.0.1 192.168.49.2 addons-716851 localhost minikube]
	I1227 09:30:20.940339  300568 provision.go:177] copyRemoteCerts
	I1227 09:30:20.940409  300568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:30:20.940449  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:20.958907  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:21.055777  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:30:21.073508  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 09:30:21.090682  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:30:21.108504  300568 provision.go:87] duration metric: took 487.736485ms to configureAuth
	I1227 09:30:21.108539  300568 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:30:21.108734  300568 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:30:21.108837  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:21.125821  300568 main.go:144] libmachine: Using SSH client type: native
	I1227 09:30:21.126132  300568 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1227 09:30:21.126147  300568 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:30:21.415856  300568 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:30:21.415884  300568 machine.go:97] duration metric: took 4.279016879s to provisionDockerMachine
	I1227 09:30:21.415895  300568 client.go:176] duration metric: took 12.400097104s to LocalClient.Create
	I1227 09:30:21.415908  300568 start.go:167] duration metric: took 12.400164936s to libmachine.API.Create "addons-716851"
	I1227 09:30:21.415927  300568 start.go:293] postStartSetup for "addons-716851" (driver="docker")
	I1227 09:30:21.415941  300568 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:30:21.416040  300568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:30:21.416089  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:21.434465  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:21.536436  300568 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:30:21.539954  300568 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:30:21.540004  300568 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:30:21.540017  300568 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:30:21.540090  300568 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:30:21.540113  300568 start.go:296] duration metric: took 124.174801ms for postStartSetup
	I1227 09:30:21.540449  300568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-716851
	I1227 09:30:21.557309  300568 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/config.json ...
	I1227 09:30:21.557601  300568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:30:21.557655  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:21.574880  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:21.673290  300568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:30:21.678069  300568 start.go:128] duration metric: took 12.666156463s to createHost
	I1227 09:30:21.678096  300568 start.go:83] releasing machines lock for "addons-716851", held for 12.666305426s
	I1227 09:30:21.678177  300568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-716851
	I1227 09:30:21.695863  300568 ssh_runner.go:195] Run: cat /version.json
	I1227 09:30:21.695878  300568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:30:21.695920  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:21.695947  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:21.713616  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:21.725709  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:21.900798  300568 ssh_runner.go:195] Run: systemctl --version
	I1227 09:30:21.907481  300568 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:30:21.944816  300568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:30:21.949257  300568 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:30:21.949346  300568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:30:21.977979  300568 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:30:21.978006  300568 start.go:496] detecting cgroup driver to use...
	I1227 09:30:21.978048  300568 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:30:21.978108  300568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:30:21.995682  300568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:30:22.011260  300568 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:30:22.011435  300568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:30:22.031740  300568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:30:22.053525  300568 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:30:22.171313  300568 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:30:22.297404  300568 docker.go:234] disabling docker service ...
	I1227 09:30:22.297514  300568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:30:22.318933  300568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:30:22.334064  300568 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:30:22.455414  300568 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:30:22.577104  300568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:30:22.589977  300568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:30:22.603426  300568 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:30:22.603513  300568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:30:22.611892  300568 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:30:22.612003  300568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:30:22.620846  300568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:30:22.629310  300568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:30:22.638073  300568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:30:22.646256  300568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:30:22.654738  300568 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:30:22.668091  300568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:30:22.677083  300568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:30:22.684641  300568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:30:22.692123  300568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:30:22.820911  300568 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:30:22.988483  300568 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:30:22.988614  300568 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:30:22.992418  300568 start.go:574] Will wait 60s for crictl version
	I1227 09:30:22.992526  300568 ssh_runner.go:195] Run: which crictl
	I1227 09:30:22.995867  300568 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:30:23.037838  300568 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:30:23.037984  300568 ssh_runner.go:195] Run: crio --version
	I1227 09:30:23.066435  300568 ssh_runner.go:195] Run: crio --version
	I1227 09:30:23.098701  300568 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:30:23.101432  300568 cli_runner.go:164] Run: docker network inspect addons-716851 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:30:23.118227  300568 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:30:23.122300  300568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:30:23.132223  300568 kubeadm.go:884] updating cluster {Name:addons-716851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-716851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:30:23.132347  300568 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:30:23.132428  300568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:30:23.173486  300568 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:30:23.173507  300568 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:30:23.173563  300568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:30:23.203298  300568 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:30:23.203363  300568 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:30:23.203387  300568 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 09:30:23.203494  300568 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-716851 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-716851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:30:23.203608  300568 ssh_runner.go:195] Run: crio config
	I1227 09:30:23.263320  300568 cni.go:84] Creating CNI manager for ""
	I1227 09:30:23.263401  300568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:30:23.263433  300568 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:30:23.263478  300568 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-716851 NodeName:addons-716851 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:30:23.263628  300568 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-716851"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
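(The kubeadm, kubelet, and kube-proxy configuration printed above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check a multi-document file like this before handing it to kubeadm is to decode each document and look at the fields that matter for CRI-O. The sketch below does that with gopkg.in/yaml.v3; the local file name and the chosen fields are assumptions for illustration, not part of the test.)

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Decodes every YAML document in a kubeadm config file and prints its kind
    // plus the CRI socket / cgroup driver fields when present.
    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the file above
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            fmt.Println("kind:", doc["kind"])
            if v, ok := doc["containerRuntimeEndpoint"]; ok {
                fmt.Println("  containerRuntimeEndpoint:", v)
            }
            if v, ok := doc["cgroupDriver"]; ok {
                fmt.Println("  cgroupDriver:", v)
            }
        }
    }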
	
	I1227 09:30:23.263725  300568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:30:23.271671  300568 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:30:23.271761  300568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:30:23.279769  300568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 09:30:23.293400  300568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:30:23.306976  300568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1227 09:30:23.323193  300568 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:30:23.327239  300568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:30:23.337228  300568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:30:23.456105  300568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:30:23.472518  300568 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851 for IP: 192.168.49.2
	I1227 09:30:23.472547  300568 certs.go:195] generating shared ca certs ...
	I1227 09:30:23.472564  300568 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:23.472714  300568 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:30:23.745138  300568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt ...
	I1227 09:30:23.745176  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt: {Name:mkc9cb3dfd3a8a9f3fe1017dd9d06aedd48ddfdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:23.745422  300568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key ...
	I1227 09:30:23.745439  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key: {Name:mk7adc53cbf45a8c1d5e8c9a4e8d1d55a2581489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:23.745539  300568 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:30:23.992860  300568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt ...
	I1227 09:30:23.992896  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt: {Name:mk04c3e525f06fd46d7dbb1a3986bcc4152463cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:23.993078  300568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key ...
	I1227 09:30:23.993094  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key: {Name:mk4dc5c9a010dcabe22ab51afe921578835c8328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:23.993180  300568 certs.go:257] generating profile certs ...
	I1227 09:30:23.993244  300568 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.key
	I1227 09:30:23.993263  300568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt with IP's: []
	I1227 09:30:24.175898  300568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt ...
	I1227 09:30:24.175930  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: {Name:mk9d00093eb343af32157170346f3051ca959b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:24.176126  300568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.key ...
	I1227 09:30:24.176140  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.key: {Name:mkd88d1c27b5bc90b8ed35ede891ce767da55c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:24.176226  300568 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.key.5c3632e3
	I1227 09:30:24.176246  300568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.crt.5c3632e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1227 09:30:24.350977  300568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.crt.5c3632e3 ...
	I1227 09:30:24.351013  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.crt.5c3632e3: {Name:mk12ec51a81f3a314dad022897249f8f229a3c1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:24.351197  300568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.key.5c3632e3 ...
	I1227 09:30:24.351215  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.key.5c3632e3: {Name:mk15d68b784c4d5ae078463b4fd485b3a8fefe59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:24.351302  300568 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.crt.5c3632e3 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.crt
	I1227 09:30:24.351377  300568 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.key.5c3632e3 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.key
	I1227 09:30:24.351433  300568 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/proxy-client.key
	I1227 09:30:24.351460  300568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/proxy-client.crt with IP's: []
	I1227 09:30:24.552633  300568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/proxy-client.crt ...
	I1227 09:30:24.552664  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/proxy-client.crt: {Name:mk46d1c38b0e918da56e6b48956d6e0c671891dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:24.552845  300568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/proxy-client.key ...
	I1227 09:30:24.552862  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/proxy-client.key: {Name:mk269c73ace26756d7fff0bdeca7678f010cc56c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:24.553071  300568 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:30:24.553119  300568 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:30:24.553151  300568 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:30:24.553187  300568 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:30:24.553825  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:30:24.573281  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:30:24.592303  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:30:24.610616  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:30:24.628531  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 09:30:24.647192  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:30:24.665821  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:30:24.683794  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:30:24.702700  300568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:30:24.722037  300568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:30:24.735274  300568 ssh_runner.go:195] Run: openssl version
	I1227 09:30:24.741778  300568 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:30:24.749442  300568 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:30:24.757014  300568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:30:24.760904  300568 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:30:24.760974  300568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:30:24.802005  300568 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:30:24.809687  300568 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:30:24.816959  300568 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:30:24.820492  300568 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:30:24.820539  300568 kubeadm.go:401] StartCluster: {Name:addons-716851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-716851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:30:24.820624  300568 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:30:24.820687  300568 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:30:24.846302  300568 cri.go:96] found id: ""
	I1227 09:30:24.846385  300568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:30:24.854559  300568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:30:24.862724  300568 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:30:24.862795  300568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:30:24.871262  300568 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:30:24.871285  300568 kubeadm.go:158] found existing configuration files:
	
	I1227 09:30:24.871370  300568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:30:24.879211  300568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:30:24.879292  300568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:30:24.886796  300568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:30:24.894834  300568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:30:24.894945  300568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:30:24.902403  300568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:30:24.910485  300568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:30:24.910571  300568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:30:24.918124  300568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:30:24.926024  300568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:30:24.926137  300568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:30:24.934006  300568 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:30:24.996432  300568 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:30:24.996831  300568 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:30:25.099721  300568 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:30:25.099853  300568 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:30:25.099918  300568 kubeadm.go:319] OS: Linux
	I1227 09:30:25.100013  300568 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:30:25.100099  300568 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:30:25.100175  300568 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:30:25.100248  300568 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:30:25.100326  300568 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:30:25.100400  300568 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:30:25.100477  300568 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:30:25.100556  300568 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:30:25.100628  300568 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:30:25.170129  300568 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:30:25.170296  300568 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:30:25.170423  300568 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:30:25.180017  300568 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:30:25.183801  300568 out.go:252]   - Generating certificates and keys ...
	I1227 09:30:25.183950  300568 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:30:25.184087  300568 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:30:25.748806  300568 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:30:25.924486  300568 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:30:26.190074  300568 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:30:26.563544  300568 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:30:26.829102  300568 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:30:26.829480  300568 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-716851 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 09:30:27.314479  300568 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:30:27.314845  300568 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-716851 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1227 09:30:27.530604  300568 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:30:27.945350  300568 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:30:28.287600  300568 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:30:28.287904  300568 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:30:28.616612  300568 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:30:28.792914  300568 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:30:29.118199  300568 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:30:29.780781  300568 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:30:30.109255  300568 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:30:30.109356  300568 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:30:30.120520  300568 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:30:30.124088  300568 out.go:252]   - Booting up control plane ...
	I1227 09:30:30.124215  300568 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:30:30.124295  300568 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:30:30.124362  300568 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:30:30.152127  300568 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:30:30.152517  300568 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:30:30.162065  300568 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:30:30.162440  300568 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:30:30.162490  300568 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:30:30.294964  300568 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:30:30.295086  300568 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:30:32.291089  300568 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000877644s
	I1227 09:30:32.295606  300568 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 09:30:32.295699  300568 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1227 09:30:32.296004  300568 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 09:30:32.296090  300568 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 09:30:34.308264  300568 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.012148966s
	I1227 09:30:35.864918  300568 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.569248955s
	I1227 09:30:37.797766  300568 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501850621s
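(The [control-plane-check] phase above is essentially an HTTP poll against each component's healthz/livez endpoint until it returns 200 or a deadline passes. Below is a minimal sketch of that loop; the kube-scheduler livez port 10259 appears in the log, while the client timeout, poll interval, and TLS handling are assumptions made for illustration.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls a component health endpoint until it answers 200 OK or
    // the deadline passes, roughly what the [control-plane-check] phase does.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Local component serving certs are not in the system trust store.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                return nil
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        fmt.Println(waitHealthy("https://127.0.0.1:10259/livez", 4*time.Minute))
    }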
	I1227 09:30:37.833572  300568 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 09:30:37.852342  300568 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 09:30:37.869692  300568 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 09:30:37.870123  300568 kubeadm.go:319] [mark-control-plane] Marking the node addons-716851 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 09:30:37.884201  300568 kubeadm.go:319] [bootstrap-token] Using token: f8zdo0.kdi9csoupsb3e1zd
	I1227 09:30:37.887310  300568 out.go:252]   - Configuring RBAC rules ...
	I1227 09:30:37.887435  300568 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 09:30:37.895658  300568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 09:30:37.910570  300568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 09:30:37.914682  300568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 09:30:37.919605  300568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 09:30:37.924171  300568 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 09:30:38.204741  300568 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 09:30:38.631961  300568 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 09:30:39.204764  300568 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 09:30:39.206028  300568 kubeadm.go:319] 
	I1227 09:30:39.206119  300568 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 09:30:39.206138  300568 kubeadm.go:319] 
	I1227 09:30:39.206217  300568 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 09:30:39.206227  300568 kubeadm.go:319] 
	I1227 09:30:39.206253  300568 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 09:30:39.206317  300568 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 09:30:39.206389  300568 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 09:30:39.206401  300568 kubeadm.go:319] 
	I1227 09:30:39.206457  300568 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 09:30:39.206466  300568 kubeadm.go:319] 
	I1227 09:30:39.206515  300568 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 09:30:39.206524  300568 kubeadm.go:319] 
	I1227 09:30:39.206576  300568 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 09:30:39.206653  300568 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 09:30:39.206727  300568 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 09:30:39.206739  300568 kubeadm.go:319] 
	I1227 09:30:39.206825  300568 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 09:30:39.206907  300568 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 09:30:39.206915  300568 kubeadm.go:319] 
	I1227 09:30:39.206998  300568 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f8zdo0.kdi9csoupsb3e1zd \
	I1227 09:30:39.207109  300568 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 \
	I1227 09:30:39.207133  300568 kubeadm.go:319] 	--control-plane 
	I1227 09:30:39.207143  300568 kubeadm.go:319] 
	I1227 09:30:39.207228  300568 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 09:30:39.207238  300568 kubeadm.go:319] 
	I1227 09:30:39.207320  300568 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f8zdo0.kdi9csoupsb3e1zd \
	I1227 09:30:39.207427  300568 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 
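(The --discovery-token-ca-cert-hash printed in the join command above is a SHA-256 digest of the cluster CA's public key, taken over its DER-encoded SubjectPublicKeyInfo. If you ever need to recompute it, for example to hand a join command to another node, the sketch below derives it from a CA certificate file; the file path is a placeholder, not taken from this run.)

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // Recomputes the kubeadm discovery-token-ca-cert-hash from a CA certificate:
    // SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cert.
    func main() {
        pemBytes, err := os.ReadFile("ca.crt") // placeholder path to the cluster CA certificate
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }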
	I1227 09:30:39.210835  300568 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:30:39.211252  300568 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:30:39.211367  300568 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:30:39.211389  300568 cni.go:84] Creating CNI manager for ""
	I1227 09:30:39.211398  300568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:30:39.214560  300568 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 09:30:39.217698  300568 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 09:30:39.221801  300568 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 09:30:39.221822  300568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 09:30:39.235029  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 09:30:39.515606  300568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 09:30:39.515743  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:39.515834  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-716851 minikube.k8s.io/updated_at=2025_12_27T09_30_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=addons-716851 minikube.k8s.io/primary=true
	I1227 09:30:39.688028  300568 ops.go:34] apiserver oom_adj: -16
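(The -16 reported above is read straight from /proc/<pid>/oom_adj for the kube-apiserver process, the same value the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run two lines earlier produced. Reading it for an arbitrary pid is a one-liner; the pid below is a placeholder.)

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // oomAdj reads the legacy oom_adj value for a process from procfs.
    func oomAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        v, err := oomAdj(1) // placeholder pid; use the kube-apiserver pid in practice
        fmt.Println(v, err)
    }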
	I1227 09:30:39.688152  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:40.189060  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:40.688290  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:41.188278  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:41.688834  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:42.189253  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:42.688648  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:43.188277  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:43.688738  300568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 09:30:43.794510  300568 kubeadm.go:1114] duration metric: took 4.278811688s to wait for elevateKubeSystemPrivileges
	I1227 09:30:43.794539  300568 kubeadm.go:403] duration metric: took 18.974002499s to StartCluster
	I1227 09:30:43.794557  300568 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:43.794674  300568 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:30:43.795046  300568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:30:43.795232  300568 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:30:43.795414  300568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 09:30:43.795669  300568 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:30:43.795710  300568 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1227 09:30:43.795811  300568 addons.go:70] Setting yakd=true in profile "addons-716851"
	I1227 09:30:43.795831  300568 addons.go:239] Setting addon yakd=true in "addons-716851"
	I1227 09:30:43.795854  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.796400  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.796780  300568 addons.go:70] Setting metrics-server=true in profile "addons-716851"
	I1227 09:30:43.796804  300568 addons.go:239] Setting addon metrics-server=true in "addons-716851"
	I1227 09:30:43.796828  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.796970  300568 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-716851"
	I1227 09:30:43.796997  300568 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-716851"
	I1227 09:30:43.797027  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.797248  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.797486  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.799579  300568 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-716851"
	I1227 09:30:43.799866  300568 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-716851"
	I1227 09:30:43.799912  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.800061  300568 addons.go:70] Setting registry=true in profile "addons-716851"
	I1227 09:30:43.800101  300568 addons.go:239] Setting addon registry=true in "addons-716851"
	I1227 09:30:43.800264  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.800448  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.802279  300568 addons.go:70] Setting registry-creds=true in profile "addons-716851"
	I1227 09:30:43.802320  300568 addons.go:239] Setting addon registry-creds=true in "addons-716851"
	I1227 09:30:43.802355  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.802837  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.799768  300568 addons.go:70] Setting cloud-spanner=true in profile "addons-716851"
	I1227 09:30:43.811383  300568 addons.go:239] Setting addon cloud-spanner=true in "addons-716851"
	I1227 09:30:43.811435  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.811918  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.815886  300568 addons.go:70] Setting storage-provisioner=true in profile "addons-716851"
	I1227 09:30:43.816050  300568 addons.go:239] Setting addon storage-provisioner=true in "addons-716851"
	I1227 09:30:43.816103  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.816816  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.799783  300568 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-716851"
	I1227 09:30:43.825518  300568 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-716851"
	I1227 09:30:43.825556  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.799791  300568 addons.go:70] Setting default-storageclass=true in profile "addons-716851"
	I1227 09:30:43.829912  300568 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-716851"
	I1227 09:30:43.830250  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.799795  300568 addons.go:70] Setting gcp-auth=true in profile "addons-716851"
	I1227 09:30:43.830337  300568 mustload.go:66] Loading cluster: addons-716851
	I1227 09:30:43.830519  300568 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:30:43.830739  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.831082  300568 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-716851"
	I1227 09:30:43.831120  300568 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-716851"
	I1227 09:30:43.831433  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.799799  300568 addons.go:70] Setting ingress=true in profile "addons-716851"
	I1227 09:30:43.844065  300568 addons.go:239] Setting addon ingress=true in "addons-716851"
	I1227 09:30:43.844159  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.844697  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.854417  300568 addons.go:70] Setting volcano=true in profile "addons-716851"
	I1227 09:30:43.854454  300568 addons.go:239] Setting addon volcano=true in "addons-716851"
	I1227 09:30:43.854490  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.854997  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.799803  300568 addons.go:70] Setting ingress-dns=true in profile "addons-716851"
	I1227 09:30:43.855295  300568 addons.go:239] Setting addon ingress-dns=true in "addons-716851"
	I1227 09:30:43.799814  300568 addons.go:70] Setting inspektor-gadget=true in profile "addons-716851"
	I1227 09:30:43.855343  300568 addons.go:239] Setting addon inspektor-gadget=true in "addons-716851"
	I1227 09:30:43.855363  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.799851  300568 out.go:179] * Verifying Kubernetes components...
	I1227 09:30:43.872020  300568 addons.go:70] Setting volumesnapshots=true in profile "addons-716851"
	I1227 09:30:43.872049  300568 addons.go:239] Setting addon volumesnapshots=true in "addons-716851"
	I1227 09:30:43.872082  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.872685  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.892673  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.943156  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:43.943670  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.958825  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.971478  300568 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I1227 09:30:43.977734  300568 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1227 09:30:43.977825  300568 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1227 09:30:43.977956  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:43.986605  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:43.996637  300568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:30:44.059746  300568 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1227 09:30:44.064170  300568 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 09:30:44.064266  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1227 09:30:44.064568  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.068147  300568 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1227 09:30:44.076162  300568 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 09:30:44.076245  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1227 09:30:44.076341  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.097510  300568 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1227 09:30:44.104983  300568 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 09:30:44.105007  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1227 09:30:44.105072  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.124158  300568 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-716851"
	I1227 09:30:44.124822  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:44.125389  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:44.137535  300568 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1227 09:30:44.140401  300568 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1227 09:30:44.140423  300568 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1227 09:30:44.140500  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.157074  300568 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:30:44.164994  300568 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:30:44.165027  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:30:44.165150  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.168710  300568 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 09:30:44.171868  300568 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1227 09:30:44.174677  300568 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I1227 09:30:44.183091  300568 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1227 09:30:44.183140  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1227 09:30:44.183208  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	W1227 09:30:44.216671  300568 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1227 09:30:44.218239  300568 addons.go:239] Setting addon default-storageclass=true in "addons-716851"
	I1227 09:30:44.218276  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:44.218883  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:44.232104  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:44.234015  300568 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 09:30:44.234621  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.239759  300568 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1227 09:30:44.246725  300568 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1227 09:30:44.246949  300568 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1227 09:30:44.247512  300568 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1227 09:30:44.268953  300568 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 09:30:44.269021  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1227 09:30:44.269153  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.268835  300568 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1227 09:30:44.288236  300568 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1227 09:30:44.288323  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.300779  300568 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 09:30:44.300860  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I1227 09:30:44.300971  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.328619  300568 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1227 09:30:44.333979  300568 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1227 09:30:44.334918  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.342463  300568 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1227 09:30:44.342489  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1227 09:30:44.342572  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.377683  300568 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1227 09:30:44.380360  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.395858  300568 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1227 09:30:44.396562  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.397906  300568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 09:30:44.398199  300568 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:30:44.398212  300568 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:30:44.398271  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.398564  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.400025  300568 out.go:179]   - Using image docker.io/registry:3.0.0
	I1227 09:30:44.402987  300568 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1227 09:30:44.403109  300568 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1227 09:30:44.403120  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1227 09:30:44.403218  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.418563  300568 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1227 09:30:44.424095  300568 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1227 09:30:44.428212  300568 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1227 09:30:44.431471  300568 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1227 09:30:44.431499  300568 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1227 09:30:44.431601  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.439819  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.440976  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.442617  300568 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1227 09:30:44.448810  300568 out.go:179]   - Using image docker.io/busybox:stable
	I1227 09:30:44.453187  300568 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 09:30:44.453214  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1227 09:30:44.453290  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:44.464268  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.520123  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.541171  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.541246  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.544452  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	W1227 09:30:44.549398  300568 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1227 09:30:44.549441  300568 retry.go:84] will retry after 400ms: ssh: handshake failed: EOF
	I1227 09:30:44.560297  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.577948  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.581137  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:44.587223  300568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:30:45.034048  300568 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1227 09:30:45.034078  300568 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1227 09:30:45.374686  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:30:45.391041  300568 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1227 09:30:45.391068  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1227 09:30:45.407338  300568 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1227 09:30:45.407366  300568 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1227 09:30:45.456685  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1227 09:30:45.461678  300568 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1227 09:30:45.461714  300568 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1227 09:30:45.468854  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1227 09:30:45.499565  300568 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1227 09:30:45.499595  300568 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1227 09:30:45.506691  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1227 09:30:45.523992  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1227 09:30:45.526798  300568 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1227 09:30:45.526826  300568 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1227 09:30:45.598384  300568 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1227 09:30:45.598427  300568 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1227 09:30:45.603831  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1227 09:30:45.625979  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1227 09:30:45.673076  300568 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1227 09:30:45.673105  300568 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1227 09:30:45.686874  300568 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1227 09:30:45.686903  300568 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1227 09:30:45.717788  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1227 09:30:45.768498  300568 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1227 09:30:45.768537  300568 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1227 09:30:45.812496  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:30:45.871681  300568 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1227 09:30:45.871710  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1227 09:30:45.996052  300568 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1227 09:30:45.996088  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I1227 09:30:46.056886  300568 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1227 09:30:46.056935  300568 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1227 09:30:46.167139  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1227 09:30:46.187627  300568 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1227 09:30:46.187656  300568 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1227 09:30:46.217590  300568 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 09:30:46.217619  300568 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1227 09:30:46.231520  300568 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1227 09:30:46.231562  300568 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1227 09:30:46.279935  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1227 09:30:46.326125  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1227 09:30:46.423818  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 09:30:46.457977  300568 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1227 09:30:46.458020  300568 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1227 09:30:46.475309  300568 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1227 09:30:46.475351  300568 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1227 09:30:46.645126  300568 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1227 09:30:46.645153  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1227 09:30:46.713754  300568 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 09:30:46.713778  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1227 09:30:46.958994  300568 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1227 09:30:46.959034  300568 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1227 09:30:47.108252  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 09:30:47.170022  300568 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1227 09:30:47.170050  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1227 09:30:47.229582  300568 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.642323246s)
	I1227 09:30:47.229645  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.85492208s)
	I1227 09:30:47.229944  300568 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.832011764s)
	I1227 09:30:47.229964  300568 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
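The sed/replace pipeline that just completed rewrites the coredns ConfigMap so in-cluster DNS can resolve host.minikube.internal to the host gateway. Reconstructed from the sed expression in the log (a sketch of only the injected piece; the rest of the Corefile is left as shipped), the relevant fragment of the resulting Corefile is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The second sed expression additionally inserts a `log` directive immediately before the existing `errors` line.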
	I1227 09:30:47.230544  300568 node_ready.go:35] waiting up to 6m0s for node "addons-716851" to be "Ready" ...
	I1227 09:30:47.642357  300568 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1227 09:30:47.642377  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1227 09:30:47.721101  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.264375814s)
	I1227 09:30:47.735517  300568 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-716851" context rescaled to 1 replicas
	I1227 09:30:47.976736  300568 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1227 09:30:47.976762  300568 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1227 09:30:48.136199  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1227 09:30:49.239154  300568 node_ready.go:57] node "addons-716851" has "Ready":"False" status (will retry)
	I1227 09:30:49.853499  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.384600835s)
	I1227 09:30:49.853557  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.346841227s)
	I1227 09:30:49.853623  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.329607846s)
	I1227 09:30:49.853801  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.249944472s)
	I1227 09:30:49.853836  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.227824815s)
	I1227 09:30:49.853876  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.136063721s)
	I1227 09:30:50.347754  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.535195473s)
	I1227 09:30:51.222040  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.05485424s)
	I1227 09:30:51.222530  300568 addons.go:495] Verifying addon ingress=true in "addons-716851"
	I1227 09:30:51.222196  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.942195702s)
	I1227 09:30:51.222243  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.896089595s)
	I1227 09:30:51.222957  300568 addons.go:495] Verifying addon registry=true in "addons-716851"
	I1227 09:30:51.222304  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.79845716s)
	I1227 09:30:51.223256  300568 addons.go:495] Verifying addon metrics-server=true in "addons-716851"
	I1227 09:30:51.222385  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.114096658s)
	W1227 09:30:51.223300  300568 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1227 09:30:51.223322  300568 retry.go:84] will retry after 200ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
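The failure above is the usual ordering race when CRDs and their custom resources are applied in a single pass: the VolumeSnapshotClass cannot be mapped because the volumesnapshotclasses CRD created moments earlier has not been established yet, so minikube schedules a retry (and below falls back to `kubectl apply --force`). A minimal sketch of avoiding the race by hand, assuming kubectl is pointed at this cluster and the same addon file paths exist on the node:

    # Apply the CRD first, wait for it to be established, then apply the CR.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml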
	I1227 09:30:51.225769  300568 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-716851 service yakd-dashboard -n yakd-dashboard
	
	I1227 09:30:51.225783  300568 out.go:179] * Verifying registry addon...
	I1227 09:30:51.225876  300568 out.go:179] * Verifying ingress addon...
	I1227 09:30:51.230464  300568 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1227 09:30:51.230579  300568 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1227 09:30:51.239582  300568 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 09:30:51.239659  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:51.240153  300568 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1227 09:30:51.240214  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1227 09:30:51.240597  300568 node_ready.go:57] node "addons-716851" has "Ready":"False" status (will retry)
	I1227 09:30:51.453722  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1227 09:30:51.508626  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.372338146s)
	I1227 09:30:51.508747  300568 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-716851"
	I1227 09:30:51.512019  300568 out.go:179] * Verifying csi-hostpath-driver addon...
	I1227 09:30:51.515869  300568 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1227 09:30:51.522162  300568 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 09:30:51.522249  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:51.737882  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:51.738266  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
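The repeated kapi.go:96 lines here and below are minikube polling the labelled addon pods until they report Running/Ready. A rough kubectl equivalent of the same checks (an illustration using the label selectors shown in the log, not minikube's internal mechanism):

    # Block until the addon pods being verified are Ready.
    kubectl -n kube-system   wait pod -l kubernetes.io/minikube-addons=registry            --for=condition=Ready --timeout=6m
    kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx              --for=condition=Ready --timeout=6m
    kubectl -n kube-system   wait pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=6m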
	I1227 09:30:51.849881  300568 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1227 09:30:51.849978  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:51.868216  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:51.981416  300568 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1227 09:30:51.995228  300568 addons.go:239] Setting addon gcp-auth=true in "addons-716851"
	I1227 09:30:51.995276  300568 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:30:51.995736  300568 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:30:52.033320  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:52.034891  300568 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1227 09:30:52.034949  300568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:30:52.052990  300568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:30:52.235378  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:52.235568  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:52.518758  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:52.734939  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:52.735232  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:53.019505  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:53.233952  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:53.234699  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:53.518982  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1227 09:30:53.735212  300568 node_ready.go:57] node "addons-716851" has "Ready":"False" status (will retry)
	I1227 09:30:53.736236  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:53.736352  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:54.019798  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:54.212767  300568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.758924251s)
	I1227 09:30:54.212832  300568 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.177920716s)
	I1227 09:30:54.216061  300568 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1227 09:30:54.219130  300568 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1227 09:30:54.222004  300568 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1227 09:30:54.222036  300568 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1227 09:30:54.237261  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:54.237458  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:54.238330  300568 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1227 09:30:54.238349  300568 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1227 09:30:54.251633  300568 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1227 09:30:54.251657  300568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1227 09:30:54.264939  300568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1227 09:30:54.519218  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:54.750788  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:54.751699  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:54.751799  300568 addons.go:495] Verifying addon gcp-auth=true in "addons-716851"
	I1227 09:30:54.754682  300568 out.go:179] * Verifying gcp-auth addon...
	I1227 09:30:54.758688  300568 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1227 09:30:54.765194  300568 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1227 09:30:54.765220  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:55.021200  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:55.234859  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:55.235016  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:55.262333  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:55.519284  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:55.735046  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:55.735143  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:55.761887  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:56.019321  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:56.234880  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:56.235141  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1227 09:30:56.235822  300568 node_ready.go:57] node "addons-716851" has "Ready":"False" status (will retry)
	I1227 09:30:56.261757  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:56.519776  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:56.742467  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:56.742682  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:56.762622  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:57.019773  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:57.234339  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:57.234872  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:57.261791  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:57.519203  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:57.734715  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:57.734871  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:57.834531  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:58.019856  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:58.235195  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:58.235611  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1227 09:30:58.236166  300568 node_ready.go:57] node "addons-716851" has "Ready":"False" status (will retry)
	I1227 09:30:58.262324  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:58.559521  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:58.738576  300568 node_ready.go:49] node "addons-716851" is "Ready"
	I1227 09:30:58.738605  300568 node_ready.go:38] duration metric: took 11.508042347s for node "addons-716851" to be "Ready" ...
	I1227 09:30:58.738618  300568 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:30:58.738675  300568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:30:58.760339  300568 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1227 09:30:58.760359  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:58.773239  300568 api_server.go:72] duration metric: took 14.97797214s to wait for apiserver process to appear ...
	I1227 09:30:58.773267  300568 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:30:58.773288  300568 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1227 09:30:58.791596  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:58.791752  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:58.798031  300568 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
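The healthz probe above can be reproduced by hand against the same endpoint; on clusters with the default system:public-info-viewer binding, /healthz is readable without client credentials (sketch; the IP and port are the ones this profile exposes):

    curl -sk https://192.168.49.2:8443/healthz
    # expected output: ok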
	I1227 09:30:58.801189  300568 api_server.go:141] control plane version: v1.35.0
	I1227 09:30:58.801223  300568 api_server.go:131] duration metric: took 27.947133ms to wait for apiserver health ...
	I1227 09:30:58.801232  300568 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:30:58.908108  300568 system_pods.go:59] 19 kube-system pods found
	I1227 09:30:58.908148  300568 system_pods.go:61] "coredns-7d764666f9-kwhzw" [10cab517-7056-4032-9a2a-7151fb2b0853] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:30:58.908156  300568 system_pods.go:61] "csi-hostpath-attacher-0" [8e6cf719-45d3-40d3-ae6c-bc29775dc008] Pending
	I1227 09:30:58.908162  300568 system_pods.go:61] "csi-hostpath-resizer-0" [6de6b4b6-cd58-4697-bc25-e0d77e2c233e] Pending
	I1227 09:30:58.908166  300568 system_pods.go:61] "csi-hostpathplugin-htnqp" [4328eb64-326d-4076-9452-679f8834938d] Pending
	I1227 09:30:58.908170  300568 system_pods.go:61] "etcd-addons-716851" [c135e2c8-8da9-41fc-b8b2-f83f74ca9de4] Running
	I1227 09:30:58.908175  300568 system_pods.go:61] "kindnet-xjkr6" [f3cdd174-702c-4a1b-ab53-8f71d8bd7f95] Running
	I1227 09:30:58.908180  300568 system_pods.go:61] "kube-apiserver-addons-716851" [225aed22-27d1-4738-8fb1-3cdb112f7f54] Running
	I1227 09:30:58.908185  300568 system_pods.go:61] "kube-controller-manager-addons-716851" [dc94644b-43c1-4820-8e8a-db0546489aee] Running
	I1227 09:30:58.908192  300568 system_pods.go:61] "kube-ingress-dns-minikube" [2b49da06-f8d8-4b73-941f-d1ff602c91d9] Pending
	I1227 09:30:58.908199  300568 system_pods.go:61] "kube-proxy-vlhc4" [825dcf72-a4d5-44ee-9857-6a1de0d97e51] Running
	I1227 09:30:58.908204  300568 system_pods.go:61] "kube-scheduler-addons-716851" [3c66988b-be2e-4b57-b2c3-a72023b79bfa] Running
	I1227 09:30:58.908219  300568 system_pods.go:61] "metrics-server-5778bb4788-wmzpf" [5d4fac15-b100-4825-93f5-1f7955b9f601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:30:58.908224  300568 system_pods.go:61] "nvidia-device-plugin-daemonset-g8pzg" [f230629c-24b3-4233-a77a-47f16f582bb6] Pending
	I1227 09:30:58.908234  300568 system_pods.go:61] "registry-788cd7d5bc-sft95" [a8eceb49-0be8-43ec-acc2-d7ba8499ec65] Pending
	I1227 09:30:58.908239  300568 system_pods.go:61] "registry-creds-567fb78d95-whf4f" [61076c10-cdf2-489b-8666-bdec472d6e03] Pending
	I1227 09:30:58.908243  300568 system_pods.go:61] "registry-proxy-dqhx6" [8db0accc-ca73-43cb-8d99-605b48ac65d1] Pending
	I1227 09:30:58.908247  300568 system_pods.go:61] "snapshot-controller-6588d87457-7g6fc" [8a5aede0-3942-417b-9148-c2fc6f282684] Pending
	I1227 09:30:58.908257  300568 system_pods.go:61] "snapshot-controller-6588d87457-s2mx4" [3697f16b-44df-4f0c-9d93-38b9ba611d4a] Pending
	I1227 09:30:58.908261  300568 system_pods.go:61] "storage-provisioner" [c0158230-b918-4deb-989b-464c128ec425] Pending
	I1227 09:30:58.908266  300568 system_pods.go:74] duration metric: took 107.028607ms to wait for pod list to return data ...
	I1227 09:30:58.908274  300568 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:30:58.972768  300568 default_sa.go:45] found service account: "default"
	I1227 09:30:58.972797  300568 default_sa.go:55] duration metric: took 64.514243ms for default service account to be created ...
	I1227 09:30:58.972808  300568 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:30:59.033324  300568 system_pods.go:86] 19 kube-system pods found
	I1227 09:30:59.033365  300568 system_pods.go:89] "coredns-7d764666f9-kwhzw" [10cab517-7056-4032-9a2a-7151fb2b0853] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:30:59.033374  300568 system_pods.go:89] "csi-hostpath-attacher-0" [8e6cf719-45d3-40d3-ae6c-bc29775dc008] Pending
	I1227 09:30:59.033383  300568 system_pods.go:89] "csi-hostpath-resizer-0" [6de6b4b6-cd58-4697-bc25-e0d77e2c233e] Pending
	I1227 09:30:59.033388  300568 system_pods.go:89] "csi-hostpathplugin-htnqp" [4328eb64-326d-4076-9452-679f8834938d] Pending
	I1227 09:30:59.033392  300568 system_pods.go:89] "etcd-addons-716851" [c135e2c8-8da9-41fc-b8b2-f83f74ca9de4] Running
	I1227 09:30:59.033398  300568 system_pods.go:89] "kindnet-xjkr6" [f3cdd174-702c-4a1b-ab53-8f71d8bd7f95] Running
	I1227 09:30:59.033402  300568 system_pods.go:89] "kube-apiserver-addons-716851" [225aed22-27d1-4738-8fb1-3cdb112f7f54] Running
	I1227 09:30:59.033412  300568 system_pods.go:89] "kube-controller-manager-addons-716851" [dc94644b-43c1-4820-8e8a-db0546489aee] Running
	I1227 09:30:59.033418  300568 system_pods.go:89] "kube-ingress-dns-minikube" [2b49da06-f8d8-4b73-941f-d1ff602c91d9] Pending
	I1227 09:30:59.033425  300568 system_pods.go:89] "kube-proxy-vlhc4" [825dcf72-a4d5-44ee-9857-6a1de0d97e51] Running
	I1227 09:30:59.033430  300568 system_pods.go:89] "kube-scheduler-addons-716851" [3c66988b-be2e-4b57-b2c3-a72023b79bfa] Running
	I1227 09:30:59.033448  300568 system_pods.go:89] "metrics-server-5778bb4788-wmzpf" [5d4fac15-b100-4825-93f5-1f7955b9f601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:30:59.033455  300568 system_pods.go:89] "nvidia-device-plugin-daemonset-g8pzg" [f230629c-24b3-4233-a77a-47f16f582bb6] Pending
	I1227 09:30:59.033466  300568 system_pods.go:89] "registry-788cd7d5bc-sft95" [a8eceb49-0be8-43ec-acc2-d7ba8499ec65] Pending
	I1227 09:30:59.033471  300568 system_pods.go:89] "registry-creds-567fb78d95-whf4f" [61076c10-cdf2-489b-8666-bdec472d6e03] Pending
	I1227 09:30:59.033476  300568 system_pods.go:89] "registry-proxy-dqhx6" [8db0accc-ca73-43cb-8d99-605b48ac65d1] Pending
	I1227 09:30:59.033491  300568 system_pods.go:89] "snapshot-controller-6588d87457-7g6fc" [8a5aede0-3942-417b-9148-c2fc6f282684] Pending
	I1227 09:30:59.033495  300568 system_pods.go:89] "snapshot-controller-6588d87457-s2mx4" [3697f16b-44df-4f0c-9d93-38b9ba611d4a] Pending
	I1227 09:30:59.033499  300568 system_pods.go:89] "storage-provisioner" [c0158230-b918-4deb-989b-464c128ec425] Pending
	I1227 09:30:59.033521  300568 retry.go:84] will retry after 300ms: missing components: kube-dns
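The retry fires because the k8s-apps check requires kube-dns (CoreDNS) to be Running and it is still Pending at this point. The same state can be inspected directly, assuming the standard k8s-app=kube-dns label that kubeadm-deployed CoreDNS carries:

    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide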
	I1227 09:30:59.033999  300568 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1227 09:30:59.034020  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:59.251939  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:59.252302  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:59.262171  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:59.317844  300568 system_pods.go:86] 19 kube-system pods found
	I1227 09:30:59.317878  300568 system_pods.go:89] "coredns-7d764666f9-kwhzw" [10cab517-7056-4032-9a2a-7151fb2b0853] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:30:59.317886  300568 system_pods.go:89] "csi-hostpath-attacher-0" [8e6cf719-45d3-40d3-ae6c-bc29775dc008] Pending
	I1227 09:30:59.317891  300568 system_pods.go:89] "csi-hostpath-resizer-0" [6de6b4b6-cd58-4697-bc25-e0d77e2c233e] Pending
	I1227 09:30:59.317898  300568 system_pods.go:89] "csi-hostpathplugin-htnqp" [4328eb64-326d-4076-9452-679f8834938d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:30:59.317904  300568 system_pods.go:89] "etcd-addons-716851" [c135e2c8-8da9-41fc-b8b2-f83f74ca9de4] Running
	I1227 09:30:59.317910  300568 system_pods.go:89] "kindnet-xjkr6" [f3cdd174-702c-4a1b-ab53-8f71d8bd7f95] Running
	I1227 09:30:59.317915  300568 system_pods.go:89] "kube-apiserver-addons-716851" [225aed22-27d1-4738-8fb1-3cdb112f7f54] Running
	I1227 09:30:59.317920  300568 system_pods.go:89] "kube-controller-manager-addons-716851" [dc94644b-43c1-4820-8e8a-db0546489aee] Running
	I1227 09:30:59.317927  300568 system_pods.go:89] "kube-ingress-dns-minikube" [2b49da06-f8d8-4b73-941f-d1ff602c91d9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:30:59.317935  300568 system_pods.go:89] "kube-proxy-vlhc4" [825dcf72-a4d5-44ee-9857-6a1de0d97e51] Running
	I1227 09:30:59.317945  300568 system_pods.go:89] "kube-scheduler-addons-716851" [3c66988b-be2e-4b57-b2c3-a72023b79bfa] Running
	I1227 09:30:59.317952  300568 system_pods.go:89] "metrics-server-5778bb4788-wmzpf" [5d4fac15-b100-4825-93f5-1f7955b9f601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:30:59.317967  300568 system_pods.go:89] "nvidia-device-plugin-daemonset-g8pzg" [f230629c-24b3-4233-a77a-47f16f582bb6] Pending
	I1227 09:30:59.317975  300568 system_pods.go:89] "registry-788cd7d5bc-sft95" [a8eceb49-0be8-43ec-acc2-d7ba8499ec65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:30:59.317987  300568 system_pods.go:89] "registry-creds-567fb78d95-whf4f" [61076c10-cdf2-489b-8666-bdec472d6e03] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:30:59.317993  300568 system_pods.go:89] "registry-proxy-dqhx6" [8db0accc-ca73-43cb-8d99-605b48ac65d1] Pending
	I1227 09:30:59.318001  300568 system_pods.go:89] "snapshot-controller-6588d87457-7g6fc" [8a5aede0-3942-417b-9148-c2fc6f282684] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:30:59.318010  300568 system_pods.go:89] "snapshot-controller-6588d87457-s2mx4" [3697f16b-44df-4f0c-9d93-38b9ba611d4a] Pending
	I1227 09:30:59.318017  300568 system_pods.go:89] "storage-provisioner" [c0158230-b918-4deb-989b-464c128ec425] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:30:59.522289  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:30:59.574767  300568 system_pods.go:86] 19 kube-system pods found
	I1227 09:30:59.574806  300568 system_pods.go:89] "coredns-7d764666f9-kwhzw" [10cab517-7056-4032-9a2a-7151fb2b0853] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:30:59.574816  300568 system_pods.go:89] "csi-hostpath-attacher-0" [8e6cf719-45d3-40d3-ae6c-bc29775dc008] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:30:59.574831  300568 system_pods.go:89] "csi-hostpath-resizer-0" [6de6b4b6-cd58-4697-bc25-e0d77e2c233e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:30:59.574839  300568 system_pods.go:89] "csi-hostpathplugin-htnqp" [4328eb64-326d-4076-9452-679f8834938d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:30:59.574845  300568 system_pods.go:89] "etcd-addons-716851" [c135e2c8-8da9-41fc-b8b2-f83f74ca9de4] Running
	I1227 09:30:59.574855  300568 system_pods.go:89] "kindnet-xjkr6" [f3cdd174-702c-4a1b-ab53-8f71d8bd7f95] Running
	I1227 09:30:59.574860  300568 system_pods.go:89] "kube-apiserver-addons-716851" [225aed22-27d1-4738-8fb1-3cdb112f7f54] Running
	I1227 09:30:59.574870  300568 system_pods.go:89] "kube-controller-manager-addons-716851" [dc94644b-43c1-4820-8e8a-db0546489aee] Running
	I1227 09:30:59.574878  300568 system_pods.go:89] "kube-ingress-dns-minikube" [2b49da06-f8d8-4b73-941f-d1ff602c91d9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:30:59.574888  300568 system_pods.go:89] "kube-proxy-vlhc4" [825dcf72-a4d5-44ee-9857-6a1de0d97e51] Running
	I1227 09:30:59.574894  300568 system_pods.go:89] "kube-scheduler-addons-716851" [3c66988b-be2e-4b57-b2c3-a72023b79bfa] Running
	I1227 09:30:59.574903  300568 system_pods.go:89] "metrics-server-5778bb4788-wmzpf" [5d4fac15-b100-4825-93f5-1f7955b9f601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:30:59.574916  300568 system_pods.go:89] "nvidia-device-plugin-daemonset-g8pzg" [f230629c-24b3-4233-a77a-47f16f582bb6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:30:59.574923  300568 system_pods.go:89] "registry-788cd7d5bc-sft95" [a8eceb49-0be8-43ec-acc2-d7ba8499ec65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:30:59.574929  300568 system_pods.go:89] "registry-creds-567fb78d95-whf4f" [61076c10-cdf2-489b-8666-bdec472d6e03] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:30:59.574936  300568 system_pods.go:89] "registry-proxy-dqhx6" [8db0accc-ca73-43cb-8d99-605b48ac65d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:30:59.574946  300568 system_pods.go:89] "snapshot-controller-6588d87457-7g6fc" [8a5aede0-3942-417b-9148-c2fc6f282684] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:30:59.574955  300568 system_pods.go:89] "snapshot-controller-6588d87457-s2mx4" [3697f16b-44df-4f0c-9d93-38b9ba611d4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:30:59.574965  300568 system_pods.go:89] "storage-provisioner" [c0158230-b918-4deb-989b-464c128ec425] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 09:30:59.737592  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:30:59.737740  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:30:59.834443  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:30:59.981082  300568 system_pods.go:86] 19 kube-system pods found
	I1227 09:30:59.981118  300568 system_pods.go:89] "coredns-7d764666f9-kwhzw" [10cab517-7056-4032-9a2a-7151fb2b0853] Running
	I1227 09:30:59.981130  300568 system_pods.go:89] "csi-hostpath-attacher-0" [8e6cf719-45d3-40d3-ae6c-bc29775dc008] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1227 09:30:59.981144  300568 system_pods.go:89] "csi-hostpath-resizer-0" [6de6b4b6-cd58-4697-bc25-e0d77e2c233e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1227 09:30:59.981153  300568 system_pods.go:89] "csi-hostpathplugin-htnqp" [4328eb64-326d-4076-9452-679f8834938d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1227 09:30:59.981159  300568 system_pods.go:89] "etcd-addons-716851" [c135e2c8-8da9-41fc-b8b2-f83f74ca9de4] Running
	I1227 09:30:59.981164  300568 system_pods.go:89] "kindnet-xjkr6" [f3cdd174-702c-4a1b-ab53-8f71d8bd7f95] Running
	I1227 09:30:59.981170  300568 system_pods.go:89] "kube-apiserver-addons-716851" [225aed22-27d1-4738-8fb1-3cdb112f7f54] Running
	I1227 09:30:59.981180  300568 system_pods.go:89] "kube-controller-manager-addons-716851" [dc94644b-43c1-4820-8e8a-db0546489aee] Running
	I1227 09:30:59.981189  300568 system_pods.go:89] "kube-ingress-dns-minikube" [2b49da06-f8d8-4b73-941f-d1ff602c91d9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1227 09:30:59.981200  300568 system_pods.go:89] "kube-proxy-vlhc4" [825dcf72-a4d5-44ee-9857-6a1de0d97e51] Running
	I1227 09:30:59.981205  300568 system_pods.go:89] "kube-scheduler-addons-716851" [3c66988b-be2e-4b57-b2c3-a72023b79bfa] Running
	I1227 09:30:59.981212  300568 system_pods.go:89] "metrics-server-5778bb4788-wmzpf" [5d4fac15-b100-4825-93f5-1f7955b9f601] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:30:59.981225  300568 system_pods.go:89] "nvidia-device-plugin-daemonset-g8pzg" [f230629c-24b3-4233-a77a-47f16f582bb6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1227 09:30:59.981232  300568 system_pods.go:89] "registry-788cd7d5bc-sft95" [a8eceb49-0be8-43ec-acc2-d7ba8499ec65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1227 09:30:59.981240  300568 system_pods.go:89] "registry-creds-567fb78d95-whf4f" [61076c10-cdf2-489b-8666-bdec472d6e03] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1227 09:30:59.981247  300568 system_pods.go:89] "registry-proxy-dqhx6" [8db0accc-ca73-43cb-8d99-605b48ac65d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1227 09:30:59.981256  300568 system_pods.go:89] "snapshot-controller-6588d87457-7g6fc" [8a5aede0-3942-417b-9148-c2fc6f282684] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:30:59.981267  300568 system_pods.go:89] "snapshot-controller-6588d87457-s2mx4" [3697f16b-44df-4f0c-9d93-38b9ba611d4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1227 09:30:59.981277  300568 system_pods.go:89] "storage-provisioner" [c0158230-b918-4deb-989b-464c128ec425] Running
	I1227 09:30:59.981289  300568 system_pods.go:126] duration metric: took 1.008474992s to wait for k8s-apps to be running ...
	I1227 09:30:59.981300  300568 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:30:59.981358  300568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:30:59.999686  300568 system_svc.go:56] duration metric: took 18.37645ms WaitForService to wait for kubelet
	I1227 09:30:59.999727  300568 kubeadm.go:587] duration metric: took 16.204470897s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:30:59.999746  300568 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:31:00.002896  300568 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 09:31:00.002926  300568 node_conditions.go:123] node cpu capacity is 2
	I1227 09:31:00.002940  300568 node_conditions.go:105] duration metric: took 3.188109ms to run NodePressure ...
	I1227 09:31:00.002953  300568 start.go:242] waiting for startup goroutines ...
	I1227 09:31:00.077426  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:00.273491  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:00.291467  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:00.294343  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:00.520903  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:00.735404  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:00.735752  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:00.762609  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:01.019093  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:01.235264  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:01.235679  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:01.262036  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:01.519216  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:01.735580  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:01.736231  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:01.762143  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:02.019562  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:02.233572  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:02.234715  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:02.261739  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:02.521917  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:02.735902  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:02.736151  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:02.762298  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:03.019856  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:03.235324  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:03.235962  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:03.263803  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:03.520253  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:03.735237  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:03.735609  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:03.762535  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:04.020042  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:04.235731  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:04.236805  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:04.262187  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:04.519914  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:04.735530  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:04.735854  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:04.761523  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:05.019693  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:05.236068  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:05.236518  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:05.262836  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:05.519717  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:05.735455  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:05.735585  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:05.762379  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:06.020158  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:06.236522  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:06.236948  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:06.262249  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:06.520249  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:06.736608  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:06.737054  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:06.762050  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:07.020343  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:07.235061  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:07.235529  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:07.262576  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:07.520774  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:07.737539  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:07.737764  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:07.837261  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:08.020914  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:08.237509  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:08.237731  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:08.262619  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:08.520291  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:08.735831  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:08.736054  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:08.762403  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:09.019944  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:09.236692  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:09.237069  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:09.262323  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:09.520518  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:09.736099  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:09.740350  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:09.762706  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:10.022060  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:10.238379  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:10.239161  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:10.263016  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:10.519883  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:10.736322  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:10.736570  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:10.762625  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:11.019513  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:11.235640  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:11.235864  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:11.261917  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:11.520153  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:11.735031  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:11.735475  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:11.763115  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:12.020175  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:12.234645  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:12.236426  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:12.262289  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:12.519913  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:12.736977  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:12.737090  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:12.763056  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:13.020213  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:13.235109  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:13.235205  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:13.262100  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:13.519656  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:13.734321  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:13.734475  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:13.762480  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:14.022622  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:14.234764  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:14.234963  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:14.262119  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:14.520001  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:14.735475  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:14.735991  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:14.762356  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:15.048012  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:15.235931  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:15.236757  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:15.262120  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:15.520208  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:15.734552  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:15.735371  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:15.762066  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:16.019140  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:16.235643  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:16.236735  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:16.262127  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:16.522726  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:16.734295  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:16.734633  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:16.762627  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:17.019131  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:17.235601  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:17.236295  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:17.336729  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:17.520454  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:17.737076  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:17.737444  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:17.763096  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:18.020637  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:18.235389  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:18.236083  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:18.262317  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:18.519545  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:18.734717  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:18.734955  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1227 09:31:18.762574  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:19.023081  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:19.234741  300568 kapi.go:107] duration metric: took 28.004275641s to wait for kubernetes.io/minikube-addons=registry ...
	I1227 09:31:19.234963  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:19.263159  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:19.519923  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:19.734878  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:19.762576  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:20.023105  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:20.234731  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:20.262490  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:20.520476  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:20.733823  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:20.763038  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:21.031532  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:21.234876  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:21.262009  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:21.519715  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:21.734145  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:21.762485  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:22.019815  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:22.234065  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:22.261931  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:22.523491  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:22.734762  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:22.762344  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:23.044483  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:23.236382  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:23.263200  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:23.520172  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:23.734896  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:23.763214  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:24.020228  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:24.240773  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:24.261801  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:24.519752  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:24.734920  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:24.762330  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:25.021425  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:25.234844  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:25.267814  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:25.521105  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:25.734090  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:25.762268  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:26.020419  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:26.234738  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:26.262181  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:26.519854  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:26.734452  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:26.762263  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:27.020622  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:27.234623  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:27.262322  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:27.520904  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:27.734907  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:27.762473  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:28.020189  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:28.235742  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:28.262828  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:28.520973  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:28.734931  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:28.767168  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:29.020855  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:29.234205  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:29.261804  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:29.520293  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:29.735391  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:29.762270  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:30.026352  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:30.235032  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:30.262448  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:30.520993  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:30.735438  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:30.762263  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:31.019941  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:31.234630  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:31.262196  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:31.520460  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:31.734442  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:31.762238  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:32.020802  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:32.234397  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:32.262516  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:32.520158  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:32.734264  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:32.762054  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:33.019703  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:33.235232  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:33.262166  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:33.519472  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:33.733948  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:33.762215  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:34.019700  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:34.234759  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:34.261992  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:34.520172  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:34.734778  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:34.762418  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:35.029665  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:35.234670  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:35.262513  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:35.521074  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:35.734836  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:35.762358  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:36.029324  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:36.235522  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:36.263902  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:36.519552  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:36.733682  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:36.762689  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:37.026519  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:37.242304  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:37.314731  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:37.519457  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:37.734609  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:37.762721  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:38.021854  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:38.235483  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:38.264380  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:38.520101  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:38.734819  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:38.761898  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:39.020139  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:39.234344  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:39.262625  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:39.523554  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:39.733885  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:39.762207  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:40.021216  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:40.236676  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:40.261561  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:40.520531  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:40.733683  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:40.761620  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:41.019684  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:41.234269  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:41.261981  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:41.520501  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:41.734793  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:41.762147  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:42.034437  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:42.235132  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:42.262783  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:42.520758  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:42.746011  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:42.764679  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:43.019611  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:43.235670  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:43.263731  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:43.525384  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:43.742458  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:43.763328  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:44.020268  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:44.239383  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:44.265382  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:44.520832  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:44.734498  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:44.763333  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:45.028954  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:45.265012  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:45.279567  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:45.520828  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:45.734772  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:45.761704  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:46.021439  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:46.235069  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:46.262628  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:46.519932  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:46.734756  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:46.769901  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:47.019582  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:47.234691  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:47.262520  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:47.519998  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:47.734541  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:47.762560  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:48.020558  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:48.234220  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:48.262372  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:48.521828  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:48.733625  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:48.762161  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:49.020178  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:49.234752  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:49.261950  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:49.521465  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:49.733882  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:49.761715  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:50.020360  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1227 09:31:50.234426  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:50.270452  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:50.520381  300568 kapi.go:107] duration metric: took 59.004510896s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1227 09:31:50.733458  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:50.762252  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:51.234247  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:51.263393  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:51.734530  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:51.762260  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:52.233492  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:52.262265  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:52.734615  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:52.761571  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:53.234235  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:53.262153  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:53.734538  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:53.762368  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:54.234140  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:54.262472  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:54.733890  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:54.761844  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:55.234210  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:55.262087  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:55.734932  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:55.762313  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:56.235183  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:56.262561  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:56.734215  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:56.762048  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:57.234446  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:57.262681  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:57.734245  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:57.762429  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:58.234374  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:58.262289  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:58.734128  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:58.762047  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:59.234210  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:59.262091  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:31:59.734469  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:31:59.762401  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:00.235245  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:00.274874  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:00.735335  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:00.761989  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:01.234699  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:01.262226  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:01.734945  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:01.762113  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:02.235137  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:02.262175  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:02.733989  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:02.761474  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:03.234718  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:03.262053  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:03.735237  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:03.762261  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:04.235042  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:04.262191  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:04.733652  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:04.762351  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:05.233922  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:05.262011  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:05.734905  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:05.762211  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:06.234783  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:06.261911  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:06.734909  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:06.761814  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:07.234920  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:07.262134  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:07.735092  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:07.762344  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:08.234472  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:08.262488  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:08.733642  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:08.761593  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:09.235024  300568 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1227 09:32:09.262853  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:09.734609  300568 kapi.go:107] duration metric: took 1m18.504023818s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1227 09:32:09.762542  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:10.262644  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:10.817198  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:11.263906  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:11.762666  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:12.262442  300568 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1227 09:32:12.762243  300568 kapi.go:107] duration metric: took 1m18.003564149s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1227 09:32:12.765190  300568 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-716851 cluster.
	I1227 09:32:12.767870  300568 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1227 09:32:12.770783  300568 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1227 09:32:12.773893  300568 out.go:179] * Enabled addons: default-storageclass, amd-gpu-device-plugin, inspektor-gadget, registry-creds, cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1227 09:32:12.776823  300568 addons.go:530] duration metric: took 1m28.981100867s for enable addons: enabled=[default-storageclass amd-gpu-device-plugin inspektor-gadget registry-creds cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner-rancher storage-provisioner metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1227 09:32:12.776885  300568 start.go:247] waiting for cluster config update ...
	I1227 09:32:12.776914  300568 start.go:256] writing updated cluster config ...
	I1227 09:32:12.777225  300568 ssh_runner.go:195] Run: rm -f paused
	I1227 09:32:12.781911  300568 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:32:12.785500  300568 pod_ready.go:83] waiting for pod "coredns-7d764666f9-kwhzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:12.790552  300568 pod_ready.go:94] pod "coredns-7d764666f9-kwhzw" is "Ready"
	I1227 09:32:12.790638  300568 pod_ready.go:86] duration metric: took 5.106299ms for pod "coredns-7d764666f9-kwhzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:12.793093  300568 pod_ready.go:83] waiting for pod "etcd-addons-716851" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:12.797596  300568 pod_ready.go:94] pod "etcd-addons-716851" is "Ready"
	I1227 09:32:12.797625  300568 pod_ready.go:86] duration metric: took 4.505792ms for pod "etcd-addons-716851" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:12.800780  300568 pod_ready.go:83] waiting for pod "kube-apiserver-addons-716851" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:12.805853  300568 pod_ready.go:94] pod "kube-apiserver-addons-716851" is "Ready"
	I1227 09:32:12.805878  300568 pod_ready.go:86] duration metric: took 5.069795ms for pod "kube-apiserver-addons-716851" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:12.808431  300568 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-716851" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:13.185702  300568 pod_ready.go:94] pod "kube-controller-manager-addons-716851" is "Ready"
	I1227 09:32:13.185730  300568 pod_ready.go:86] duration metric: took 377.272309ms for pod "kube-controller-manager-addons-716851" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:13.386054  300568 pod_ready.go:83] waiting for pod "kube-proxy-vlhc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:13.785998  300568 pod_ready.go:94] pod "kube-proxy-vlhc4" is "Ready"
	I1227 09:32:13.786028  300568 pod_ready.go:86] duration metric: took 399.94335ms for pod "kube-proxy-vlhc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:13.986555  300568 pod_ready.go:83] waiting for pod "kube-scheduler-addons-716851" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:14.386392  300568 pod_ready.go:94] pod "kube-scheduler-addons-716851" is "Ready"
	I1227 09:32:14.386421  300568 pod_ready.go:86] duration metric: took 399.839466ms for pod "kube-scheduler-addons-716851" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:32:14.386434  300568 pod_ready.go:40] duration metric: took 1.604480846s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:32:14.439940  300568 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 09:32:14.443197  300568 out.go:203] 
	W1227 09:32:14.446124  300568 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 09:32:14.448965  300568 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 09:32:14.451826  300568 out.go:179] * Done! kubectl is now configured to use "addons-716851" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 09:32:43 addons-716851 crio[829]: time="2025-12-27T09:32:43.038954892Z" level=info msg="Started container" PID=5389 containerID=d03c3d78133944e9d7fe06ca886a389ab721d9cfe602c8b1669bdb6bfd66b5bf description=default/test-local-path/busybox id=92982fbb-6c85-4e9d-a276-661db2c2d79c name=/runtime.v1.RuntimeService/StartContainer sandboxID=4b21672f642cc0312c04c48abea4f6fae8764545ffaca117e04ab0c30daf156a
	Dec 27 09:32:44 addons-716851 crio[829]: time="2025-12-27T09:32:44.485154668Z" level=info msg="Stopping pod sandbox: 4b21672f642cc0312c04c48abea4f6fae8764545ffaca117e04ab0c30daf156a" id=0cdccb93-e53d-48bb-99e7-cc5d725661eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 27 09:32:44 addons-716851 crio[829]: time="2025-12-27T09:32:44.485419603Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:4b21672f642cc0312c04c48abea4f6fae8764545ffaca117e04ab0c30daf156a UID:b256bc01-6c7e-48a7-8f66-e045c5a0b526 NetNS:/var/run/netns/62ccd9b0-fd1e-4f79-ae57-2098207f9e8c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400142d6e8}] Aliases:map[]}"
	Dec 27 09:32:44 addons-716851 crio[829]: time="2025-12-27T09:32:44.485560953Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:32:44 addons-716851 crio[829]: time="2025-12-27T09:32:44.512913973Z" level=info msg="Stopped pod sandbox: 4b21672f642cc0312c04c48abea4f6fae8764545ffaca117e04ab0c30daf156a" id=0cdccb93-e53d-48bb-99e7-cc5d725661eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.121884318Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059/POD" id=3c8c4666-9200-44f8-aacf-9b4015381e8c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.121949656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.174814443Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059 Namespace:local-path-storage ID:388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb UID:33c3571f-a30b-44c0-9ba2-22e262791fb3 NetNS:/var/run/netns/3ea954a7-815d-418c-95c3-14e6230065cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400142c200}] Aliases:map[]}"
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.174857758Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059 to CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.19573907Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059 Namespace:local-path-storage ID:388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb UID:33c3571f-a30b-44c0-9ba2-22e262791fb3 NetNS:/var/run/netns/3ea954a7-815d-418c-95c3-14e6230065cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400142c200}] Aliases:map[]}"
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.196202298Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059 for CNI network kindnet (type=ptp)"
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.202625926Z" level=info msg="Ran pod sandbox 388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb with infra container: local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059/POD" id=3c8c4666-9200-44f8-aacf-9b4015381e8c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.205023217Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=4f51e270-3b8c-40a1-a7db-930e44059140 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.209505078Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=eca77dc0-7905-44f3-a228-c13c48749f35 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.222020091Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059/helper-pod" id=cbb24cde-6ce5-475f-849e-233086f4c024 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.222287702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.232827643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.233359352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.282713668Z" level=info msg="Created container 895eafb626d7274adc113a5131b2ed85316a6ea2ace1745cf5500a62aa5bc1f6: local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059/helper-pod" id=cbb24cde-6ce5-475f-849e-233086f4c024 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.28374186Z" level=info msg="Starting container: 895eafb626d7274adc113a5131b2ed85316a6ea2ace1745cf5500a62aa5bc1f6" id=3d64a73b-db4f-4b6b-9dcc-81823d5bc004 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:32:46 addons-716851 crio[829]: time="2025-12-27T09:32:46.285815352Z" level=info msg="Started container" PID=5484 containerID=895eafb626d7274adc113a5131b2ed85316a6ea2ace1745cf5500a62aa5bc1f6 description=local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059/helper-pod id=3d64a73b-db4f-4b6b-9dcc-81823d5bc004 name=/runtime.v1.RuntimeService/StartContainer sandboxID=388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb
	Dec 27 09:32:47 addons-716851 crio[829]: time="2025-12-27T09:32:47.503091988Z" level=info msg="Stopping pod sandbox: 388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb" id=9682f28c-001d-4dd4-9c64-7ff1d5b6d3ea name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 27 09:32:47 addons-716851 crio[829]: time="2025-12-27T09:32:47.503376902Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059 Namespace:local-path-storage ID:388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb UID:33c3571f-a30b-44c0-9ba2-22e262791fb3 NetNS:/var/run/netns/3ea954a7-815d-418c-95c3-14e6230065cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400142ce10}] Aliases:map[]}"
	Dec 27 09:32:47 addons-716851 crio[829]: time="2025-12-27T09:32:47.50350834Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059 from CNI network \"kindnet\" (type=ptp)"
	Dec 27 09:32:47 addons-716851 crio[829]: time="2025-12-27T09:32:47.530789329Z" level=info msg="Stopped pod sandbox: 388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb" id=9682f28c-001d-4dd4-9c64-7ff1d5b6d3ea name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	895eafb626d72       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   388ee9fa53dbc       helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059   local-path-storage
	d03c3d7813394       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   4b21672f642cc       test-local-path                                              default
	3eeb0a5f14e66       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   a64990866fb49       helper-pod-create-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059   local-path-storage
	d48c160083597       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago        Exited              registry-test                            0                   0f8376d376a7b       registry-test                                                default
	2c8766363118d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   ddff1c51e6239       busybox                                                      default
	f2c80a14449fe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 35 seconds ago       Running             gcp-auth                                 0                   4da5eb8c5ff13       gcp-auth-5bbcf684b5-vlrtj                                    gcp-auth
	1ba4c86c6f265       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             39 seconds ago       Running             controller                               0                   90345d195a9a4       ingress-nginx-controller-7847b5c79c-zbtqg                    ingress-nginx
	3aff66e9fa8d0       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          58 seconds ago       Running             csi-snapshotter                          0                   35de7a8f5c4e1       csi-hostpathplugin-htnqp                                     kube-system
	dca9a42c7a046       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          59 seconds ago       Running             csi-provisioner                          0                   35de7a8f5c4e1       csi-hostpathplugin-htnqp                                     kube-system
	0d85776b3afde       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   35de7a8f5c4e1       csi-hostpathplugin-htnqp                                     kube-system
	8ec7b4bf4c834       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   35de7a8f5c4e1       csi-hostpathplugin-htnqp                                     kube-system
	84effb21b12c4       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   35de7a8f5c4e1       csi-hostpathplugin-htnqp                                     kube-system
	be287d4d96e95       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    1                   f974be2078333       ingress-nginx-admission-patch-pmg49                          ingress-nginx
	792ed4232ad33       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            About a minute ago   Running             gadget                                   0                   536e8e5d45fc3       gadget-zv99m                                                 gadget
	1981e5d769fb2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   35de7a8f5c4e1       csi-hostpathplugin-htnqp                                     kube-system
	3fbcb3867c991       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   6525fcf579840       snapshot-controller-6588d87457-s2mx4                         kube-system
	94372f90138c6       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   3c9db08dde0ec       csi-hostpath-attacher-0                                      kube-system
	a1466188d1e09       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   516751855c7cd       ingress-nginx-admission-create-fbmj5                         ingress-nginx
	afe39d3c139a7       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   b530ca8692e81       local-path-provisioner-c44bcd496-jlf4d                       local-path-storage
	3f588dfa11bc0       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   26c4085fe7046       csi-hostpath-resizer-0                                       kube-system
	0443b7b8b5486       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   478f8640ec856       nvidia-device-plugin-daemonset-g8pzg                         kube-system
	f701c830b2d20       gcr.io/cloud-spanner-emulator/emulator@sha256:084e511546640743b2d25fe2ee59800bc7ec910acfc12175bad2270f159f5eba                               About a minute ago   Running             cloud-spanner-emulator                   0                   cae5734116223       cloud-spanner-emulator-5649ccbc87-t87sw                      default
	b01d3b9f8f811       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   810a34e48de11       metrics-server-5778bb4788-wmzpf                              kube-system
	72205a133899b       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   9fb2850701727       snapshot-controller-6588d87457-7g6fc                         kube-system
	2ddd8564488ea       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   62cff1591ee0c       registry-788cd7d5bc-sft95                                    kube-system
	60b83f290916a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   c3c6276cb3eb6       registry-proxy-dqhx6                                         kube-system
	cd76119f3dc58       ghcr.io/manusa/yakd@sha256:68bfcea671292190cdd2b127455726ac24794d1f7c55ce74c33d4648a3a0f50b                                                  About a minute ago   Running             yakd                                     0                   e1997547b6567       yakd-dashboard-7bcf5795cd-dz5gq                              yakd-dashboard
	b5818e80129c5       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   c9e3c56693563       kube-ingress-dns-minikube                                    kube-system
	dbd8b56c3e8ba       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                                                             About a minute ago   Running             coredns                                  0                   18b1d6a67aed7       coredns-7d764666f9-kwhzw                                     kube-system
	aafb6e810f661       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   79ab75a4044f4       storage-provisioner                                          kube-system
	3f46f018f577b       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           2 minutes ago        Running             kindnet-cni                              0                   fcd520403ccab       kindnet-xjkr6                                                kube-system
	b9bc8aa42a37b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                                                             2 minutes ago        Running             kube-proxy                               0                   b2b33bd56cf9e       kube-proxy-vlhc4                                             kube-system
	9c3c355e3c9b1       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                                                             2 minutes ago        Running             kube-scheduler                           0                   7e7d10846600b       kube-scheduler-addons-716851                                 kube-system
	321b58e58fcee       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                                                             2 minutes ago        Running             etcd                                     0                   e9ad6d9bf25fb       etcd-addons-716851                                           kube-system
	b431d42c9b706       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                                                             2 minutes ago        Running             kube-controller-manager                  0                   db65dfcaf3b84       kube-controller-manager-addons-716851                        kube-system
	b49246ec3babd       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                                                             2 minutes ago        Running             kube-apiserver                           0                   4f435bbe98b98       kube-apiserver-addons-716851                                 kube-system
	
	
	==> coredns [dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986] <==
	[INFO] 10.244.0.4:48595 - 20282 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001862486s
	[INFO] 10.244.0.4:48595 - 44579 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000229448s
	[INFO] 10.244.0.4:48595 - 47690 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000312935s
	[INFO] 10.244.0.4:52134 - 13616 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000263383s
	[INFO] 10.244.0.4:52134 - 13378 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081026s
	[INFO] 10.244.0.4:36450 - 34925 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000138371s
	[INFO] 10.244.0.4:36450 - 34704 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075545s
	[INFO] 10.244.0.4:47543 - 4494 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000528097s
	[INFO] 10.244.0.4:47543 - 4314 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000141013s
	[INFO] 10.244.0.4:36564 - 45649 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001757608s
	[INFO] 10.244.0.4:36564 - 45830 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00186868s
	[INFO] 10.244.0.4:41025 - 55233 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134235s
	[INFO] 10.244.0.4:41025 - 54821 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000186962s
	[INFO] 10.244.0.20:51033 - 59016 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000258502s
	[INFO] 10.244.0.20:59162 - 20665 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192402s
	[INFO] 10.244.0.20:56026 - 41508 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00027876s
	[INFO] 10.244.0.20:35237 - 9466 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00040978s
	[INFO] 10.244.0.20:39682 - 6062 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163085s
	[INFO] 10.244.0.20:51154 - 28109 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110958s
	[INFO] 10.244.0.20:35723 - 28661 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001872047s
	[INFO] 10.244.0.20:60448 - 36576 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002403911s
	[INFO] 10.244.0.20:45888 - 55437 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001728983s
	[INFO] 10.244.0.20:54945 - 58447 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002824989s
	[INFO] 10.244.0.22:53060 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000237177s
	[INFO] 10.244.0.22:36599 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000198055s
	
	
	==> describe nodes <==
	Name:               addons-716851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-716851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=addons-716851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T09_30_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-716851
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-716851"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 09:30:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-716851
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 09:32:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 09:32:41 +0000   Sat, 27 Dec 2025 09:30:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 09:32:41 +0000   Sat, 27 Dec 2025 09:30:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 09:32:41 +0000   Sat, 27 Dec 2025 09:30:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 09:32:41 +0000   Sat, 27 Dec 2025 09:30:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-716851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                f5d02be4-1c86-43ce-b34f-e7d665864adf
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-5649ccbc87-t87sw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  gadget                      gadget-zv99m                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  gcp-auth                    gcp-auth-5bbcf684b5-vlrtj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-zbtqg    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         117s
	  kube-system                 coredns-7d764666f9-kwhzw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpathplugin-htnqp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 etcd-addons-716851                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m10s
	  kube-system                 kindnet-xjkr6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m5s
	  kube-system                 kube-apiserver-addons-716851                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-addons-716851        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-vlhc4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-addons-716851                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 metrics-server-5778bb4788-wmzpf              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         118s
	  kube-system                 nvidia-device-plugin-daemonset-g8pzg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 registry-788cd7d5bc-sft95                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-creds-567fb78d95-whf4f              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 registry-proxy-dqhx6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 snapshot-controller-6588d87457-7g6fc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 snapshot-controller-6588d87457-s2mx4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-c44bcd496-jlf4d       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-dz5gq              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  2m6s  node-controller  Node addons-716851 event: Registered Node addons-716851 in Controller
	
	
	==> dmesg <==
	[Dec27 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015479] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.516409] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034238] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.771451] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.481009] kauditd_printk_skb: 39 callbacks suppressed
	[Dec27 08:29] hrtimer: interrupt took 43410871 ns
	[Dec27 09:29] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 09:30] overlayfs: idmapped layers are currently not supported
	[  +0.068519] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81] <==
	{"level":"info","ts":"2025-12-27T09:30:32.644385Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T09:30:33.188010Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T09:30:33.188121Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T09:30:33.188199Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-12-27T09:30:33.188260Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:30:33.188305Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T09:30:33.192003Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T09:30:33.192080Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:30:33.192125Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-12-27T09:30:33.192161Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-27T09:30:33.196148Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-716851 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T09:30:33.196324Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:30:33.196495Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:30:33.199985Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T09:30:33.209987Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T09:30:33.236140Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T09:30:33.237756Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:30:33.238668Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:30:33.238764Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T09:30:33.237993Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:30:33.238040Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T09:30:33.238824Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T09:30:33.241057Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T09:30:33.241630Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-27T09:30:33.242335Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [f2c80a14449fe30a67501107ac703626d5c650d273b384f738d9eff99e55ae39] <==
	2025/12/27 09:32:12 GCP Auth Webhook started!
	2025/12/27 09:32:14 Ready to marshal response ...
	2025/12/27 09:32:14 Ready to write response ...
	2025/12/27 09:32:15 Ready to marshal response ...
	2025/12/27 09:32:15 Ready to write response ...
	2025/12/27 09:32:15 Ready to marshal response ...
	2025/12/27 09:32:15 Ready to write response ...
	2025/12/27 09:32:36 Ready to marshal response ...
	2025/12/27 09:32:36 Ready to write response ...
	2025/12/27 09:32:37 Ready to marshal response ...
	2025/12/27 09:32:37 Ready to write response ...
	2025/12/27 09:32:38 Ready to marshal response ...
	2025/12/27 09:32:38 Ready to write response ...
	2025/12/27 09:32:45 Ready to marshal response ...
	2025/12/27 09:32:45 Ready to write response ...
	
	
	==> kernel <==
	 09:32:48 up  1:15,  0 user,  load average: 2.54, 2.75, 2.71
	Linux addons-716851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c] <==
	I1227 09:30:48.339298       1 controller.go:711] "Syncing nftables rules"
	I1227 09:30:58.116445       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:30:58.116589       1 main.go:301] handling current node
	I1227 09:31:08.116436       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:31:08.116470       1 main.go:301] handling current node
	I1227 09:31:18.117227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:31:18.117266       1 main.go:301] handling current node
	I1227 09:31:28.116855       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:31:28.116899       1 main.go:301] handling current node
	I1227 09:31:38.117139       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:31:38.117194       1 main.go:301] handling current node
	I1227 09:31:48.117157       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:31:48.117228       1 main.go:301] handling current node
	I1227 09:31:58.123300       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:31:58.123336       1 main.go:301] handling current node
	I1227 09:32:08.120088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:32:08.120125       1 main.go:301] handling current node
	I1227 09:32:18.116454       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:32:18.116492       1 main.go:301] handling current node
	I1227 09:32:28.118272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:32:28.118413       1 main.go:301] handling current node
	I1227 09:32:38.116542       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:32:38.116595       1 main.go:301] handling current node
	I1227 09:32:48.117535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1227 09:32:48.117568       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c] <==
	W1227 09:30:51.821404       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1227 09:30:54.623088       1 alloc.go:329] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.98.251.38"}
	W1227 09:30:58.495192       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.251.38:443: connect: connection refused
	E1227 09:30:58.495240       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.251.38:443: connect: connection refused" logger="UnhandledError"
	W1227 09:30:58.497091       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.251.38:443: connect: connection refused
	E1227 09:30:58.497129       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.251.38:443: connect: connection refused" logger="UnhandledError"
	W1227 09:30:58.609568       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.251.38:443: connect: connection refused
	E1227 09:30:58.609615       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.251.38:443: connect: connection refused" logger="UnhandledError"
	W1227 09:31:12.670772       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 09:31:12.692845       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:31:12.754150       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1227 09:31:12.779382       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1227 09:31:25.092638       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.52.126:443: connect: connection refused" logger="UnhandledError"
	W1227 09:31:25.092880       1 handler_proxy.go:99] no RequestInfo found in the context
	E1227 09:31:25.092957       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1227 09:31:25.095073       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.52.126:443: connect: connection refused" logger="UnhandledError"
	E1227 09:31:25.099955       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.52.126:443: connect: connection refused" logger="UnhandledError"
	E1227 09:31:25.121995       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.52.126:443: connect: connection refused" logger="UnhandledError"
	E1227 09:31:25.163412       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.52.126:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.52.126:443: connect: connection refused" logger="UnhandledError"
	I1227 09:31:25.352524       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1227 09:32:24.711525       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40544: use of closed network connection
	E1227 09:32:24.846423       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40568: use of closed network connection
	
	
	==> kube-controller-manager [b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f] <==
	I1227 09:30:42.619404       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 09:30:42.619427       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:30:42.619432       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:42.621109       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:30:42.621730       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:42.618331       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:42.622410       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 09:30:42.622656       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="addons-716851"
	I1227 09:30:42.622965       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 09:30:42.639530       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:42.653527       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:42.676887       1 range_allocator.go:433] "Set node PodCIDR" node="addons-716851" podCIDRs=["10.244.0.0/24"]
	I1227 09:30:42.719840       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:42.723574       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 09:30:42.723586       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 09:30:42.723701       1 shared_informer.go:377] "Caches are synced"
	E1227 09:30:50.215514       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1227 09:31:02.625089       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	E1227 09:31:12.662297       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1227 09:31:12.662546       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1227 09:31:12.662659       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:31:12.738129       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1227 09:31:12.746050       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:31:12.763189       1 shared_informer.go:377] "Caches are synced"
	I1227 09:31:12.846527       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e] <==
	I1227 09:30:44.655719       1 server_linux.go:53] "Using iptables proxy"
	I1227 09:30:44.768315       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:30:44.869926       1 shared_informer.go:377] "Caches are synced"
	I1227 09:30:44.869960       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1227 09:30:44.870028       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 09:30:44.923257       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 09:30:44.923331       1 server_linux.go:136] "Using iptables Proxier"
	I1227 09:30:44.931121       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 09:30:44.931455       1 server.go:529] "Version info" version="v1.35.0"
	I1227 09:30:44.931479       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:30:44.933560       1 config.go:200] "Starting service config controller"
	I1227 09:30:44.933582       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 09:30:44.933604       1 config.go:106] "Starting endpoint slice config controller"
	I1227 09:30:44.933608       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 09:30:44.933620       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 09:30:44.933624       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 09:30:44.934244       1 config.go:309] "Starting node config controller"
	I1227 09:30:44.934262       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 09:30:44.934269       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 09:30:45.036365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 09:30:45.036414       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 09:30:45.036441       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204] <==
	E1227 09:30:35.878193       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:30:35.878259       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:30:35.878303       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:30:35.878345       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:30:35.878390       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:30:35.878434       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:30:35.878476       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:30:35.878519       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:30:35.887776       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:30:35.887885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:30:35.888007       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:30:35.888126       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:30:35.888138       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:30:35.888191       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:30:35.888359       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:30:36.686818       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:30:36.712390       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:30:36.736303       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:30:36.834344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 09:30:36.854094       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:30:36.887040       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:30:37.003777       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:30:37.082369       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:30:37.120733       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1227 09:30:40.026028       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 09:32:44 addons-716851 kubelet[1263]: I1227 09:32:44.706854    1263 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b256bc01-6c7e-48a7-8f66-e045c5a0b526-kube-api-access-4pj57" pod "b256bc01-6c7e-48a7-8f66-e045c5a0b526" (UID: "b256bc01-6c7e-48a7-8f66-e045c5a0b526"). InnerVolumeSpecName "kube-api-access-4pj57". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 27 09:32:44 addons-716851 kubelet[1263]: I1227 09:32:44.805619    1263 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4pj57\" (UniqueName: \"kubernetes.io/projected/b256bc01-6c7e-48a7-8f66-e045c5a0b526-kube-api-access-4pj57\") on node \"addons-716851\" DevicePath \"\""
	Dec 27 09:32:44 addons-716851 kubelet[1263]: I1227 09:32:44.805676    1263 reconciler_common.go:299] "Volume detached for volume \"pvc-996fb562-ecfc-48a4-90f6-7b63693bb059\" (UniqueName: \"kubernetes.io/host-path/b256bc01-6c7e-48a7-8f66-e045c5a0b526-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059\") on node \"addons-716851\" DevicePath \"\""
	Dec 27 09:32:44 addons-716851 kubelet[1263]: I1227 09:32:44.805690    1263 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b256bc01-6c7e-48a7-8f66-e045c5a0b526-gcp-creds\") on node \"addons-716851\" DevicePath \"\""
	Dec 27 09:32:45 addons-716851 kubelet[1263]: I1227 09:32:45.491353    1263 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b21672f642cc0312c04c48abea4f6fae8764545ffaca117e04ab0c30daf156a"
	Dec 27 09:32:45 addons-716851 kubelet[1263]: I1227 09:32:45.914601    1263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-data\") pod \"helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059\" (UID: \"33c3571f-a30b-44c0-9ba2-22e262791fb3\") " pod="local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059"
	Dec 27 09:32:45 addons-716851 kubelet[1263]: I1227 09:32:45.915320    1263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pmjm\" (UniqueName: \"kubernetes.io/projected/33c3571f-a30b-44c0-9ba2-22e262791fb3-kube-api-access-8pmjm\") pod \"helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059\" (UID: \"33c3571f-a30b-44c0-9ba2-22e262791fb3\") " pod="local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059"
	Dec 27 09:32:45 addons-716851 kubelet[1263]: I1227 09:32:45.915492    1263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-gcp-creds\") pod \"helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059\" (UID: \"33c3571f-a30b-44c0-9ba2-22e262791fb3\") " pod="local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059"
	Dec 27 09:32:45 addons-716851 kubelet[1263]: I1227 09:32:45.915622    1263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/33c3571f-a30b-44c0-9ba2-22e262791fb3-script\") pod \"helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059\" (UID: \"33c3571f-a30b-44c0-9ba2-22e262791fb3\") " pod="local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059"
	Dec 27 09:32:46 addons-716851 kubelet[1263]: W1227 09:32:46.198424    1263 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/dd50287978e2f660a8499c3f3df283d7c72a30ddc502b6f90f9d306958042807/crio-388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb WatchSource:0}: Error finding container 388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb: Status 404 returned error can't find the container with id 388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb
	Dec 27 09:32:46 addons-716851 kubelet[1263]: I1227 09:32:46.551316    1263 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b256bc01-6c7e-48a7-8f66-e045c5a0b526" path="/var/lib/kubelet/pods/b256bc01-6c7e-48a7-8f66-e045c5a0b526/volumes"
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.642143    1263 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/33c3571f-a30b-44c0-9ba2-22e262791fb3-kube-api-access-8pmjm\" (UniqueName: \"kubernetes.io/projected/33c3571f-a30b-44c0-9ba2-22e262791fb3-kube-api-access-8pmjm\") pod \"33c3571f-a30b-44c0-9ba2-22e262791fb3\" (UID: \"33c3571f-a30b-44c0-9ba2-22e262791fb3\") "
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.642577    1263 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-gcp-creds\" (UniqueName: \"kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-gcp-creds\") pod \"33c3571f-a30b-44c0-9ba2-22e262791fb3\" (UID: \"33c3571f-a30b-44c0-9ba2-22e262791fb3\") "
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.642623    1263 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/33c3571f-a30b-44c0-9ba2-22e262791fb3-script\" (UniqueName: \"kubernetes.io/configmap/33c3571f-a30b-44c0-9ba2-22e262791fb3-script\") pod \"33c3571f-a30b-44c0-9ba2-22e262791fb3\" (UID: \"33c3571f-a30b-44c0-9ba2-22e262791fb3\") "
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.642649    1263 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-data\" (UniqueName: \"kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-data\") pod \"33c3571f-a30b-44c0-9ba2-22e262791fb3\" (UID: \"33c3571f-a30b-44c0-9ba2-22e262791fb3\") "
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.642815    1263 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-data" pod "33c3571f-a30b-44c0-9ba2-22e262791fb3" (UID: "33c3571f-a30b-44c0-9ba2-22e262791fb3"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.642845    1263 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-gcp-creds" pod "33c3571f-a30b-44c0-9ba2-22e262791fb3" (UID: "33c3571f-a30b-44c0-9ba2-22e262791fb3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.643128    1263 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c3571f-a30b-44c0-9ba2-22e262791fb3-script" pod "33c3571f-a30b-44c0-9ba2-22e262791fb3" (UID: "33c3571f-a30b-44c0-9ba2-22e262791fb3"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.664403    1263 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c3571f-a30b-44c0-9ba2-22e262791fb3-kube-api-access-8pmjm" pod "33c3571f-a30b-44c0-9ba2-22e262791fb3" (UID: "33c3571f-a30b-44c0-9ba2-22e262791fb3"). InnerVolumeSpecName "kube-api-access-8pmjm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.744123    1263 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-data\") on node \"addons-716851\" DevicePath \"\""
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.744165    1263 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pmjm\" (UniqueName: \"kubernetes.io/projected/33c3571f-a30b-44c0-9ba2-22e262791fb3-kube-api-access-8pmjm\") on node \"addons-716851\" DevicePath \"\""
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.744179    1263 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/33c3571f-a30b-44c0-9ba2-22e262791fb3-gcp-creds\") on node \"addons-716851\" DevicePath \"\""
	Dec 27 09:32:47 addons-716851 kubelet[1263]: I1227 09:32:47.744188    1263 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/33c3571f-a30b-44c0-9ba2-22e262791fb3-script\") on node \"addons-716851\" DevicePath \"\""
	Dec 27 09:32:48 addons-716851 kubelet[1263]: I1227 09:32:48.508543    1263 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388ee9fa53dbc2652b393b6ff88c352915697d5dfea2926cdd38bee632b675eb"
	Dec 27 09:32:48 addons-716851 kubelet[1263]: E1227 09:32:48.510572    1263 status_manager.go:1045] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059\" is forbidden: User \"system:node:addons-716851\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-716851' and this object" podUID="33c3571f-a30b-44c0-9ba2-22e262791fb3" pod="local-path-storage/helper-pod-delete-pvc-996fb562-ecfc-48a4-90f6-7b63693bb059"
	
	
	==> storage-provisioner [aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c] <==
	W1227 09:32:23.774125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:25.777744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:25.784720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:27.787574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:27.792125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:29.795601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:29.800116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:31.803009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:31.808057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:33.811041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:33.815926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:35.819461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:35.826153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:37.834186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:37.845304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:39.849674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:39.856823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:41.860130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:41.866229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:43.869907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:43.876457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:45.892526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:45.906229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:47.910422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 09:32:47.917459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-716851 -n addons-716851
helpers_test.go:270: (dbg) Run:  kubectl --context addons-716851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-fbmj5 ingress-nginx-admission-patch-pmg49 registry-creds-567fb78d95-whf4f
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-716851 describe pod ingress-nginx-admission-create-fbmj5 ingress-nginx-admission-patch-pmg49 registry-creds-567fb78d95-whf4f
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-716851 describe pod ingress-nginx-admission-create-fbmj5 ingress-nginx-admission-patch-pmg49 registry-creds-567fb78d95-whf4f: exit status 1 (85.199392ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fbmj5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pmg49" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-whf4f" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-716851 describe pod ingress-nginx-admission-create-fbmj5 ingress-nginx-admission-patch-pmg49 registry-creds-567fb78d95-whf4f: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable headlamp --alsologtostderr -v=1: exit status 11 (266.156942ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:32:49.468735  307914 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:32:49.469489  307914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:49.469531  307914 out.go:374] Setting ErrFile to fd 2...
	I1227 09:32:49.469552  307914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:49.469964  307914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:32:49.470417  307914 mustload.go:66] Loading cluster: addons-716851
	I1227 09:32:49.471105  307914 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:49.471151  307914 addons.go:622] checking whether the cluster is paused
	I1227 09:32:49.471327  307914 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:49.471367  307914 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:32:49.472483  307914 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:32:49.490449  307914 ssh_runner.go:195] Run: systemctl --version
	I1227 09:32:49.490513  307914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:32:49.508435  307914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:32:49.611328  307914 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:32:49.611417  307914 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:32:49.658375  307914 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:32:49.658402  307914 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:32:49.658409  307914 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:32:49.658412  307914 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:32:49.658416  307914 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:32:49.658420  307914 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:32:49.658423  307914 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:32:49.658433  307914 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:32:49.658436  307914 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:32:49.658442  307914 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:32:49.658445  307914 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:32:49.658449  307914 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:32:49.658453  307914 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:32:49.658457  307914 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:32:49.658471  307914 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:32:49.658476  307914 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:32:49.658480  307914 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:32:49.658485  307914 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:32:49.658495  307914 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:32:49.658502  307914 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:32:49.658507  307914 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:32:49.658510  307914 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:32:49.658513  307914 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:32:49.658524  307914 cri.go:96] found id: ""
	I1227 09:32:49.658586  307914 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:32:49.674617  307914 out.go:203] 
	W1227 09:32:49.677570  307914 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:32:49.677599  307914 out.go:285] * 
	* 
	W1227 09:32:49.679599  307914 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:32:49.682407  307914 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.45s)
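Every one of these `addons disable` failures follows the same shape visible in the stderr above: minikube lists the kube-system containers over SSH with crictl, then tries to confirm none of them are paused by shelling out to `sudo runc list -f json`; on this crio node `/run/runc` does not exist, so that command exits 1 and the CLI aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). The sketch below only reproduces that shell-out so the failure mode is easy to see in isolation; it is not minikube's actual code, and it assumes a node where sudo, crictl, and runc are on PATH.

	// Minimal sketch (assumption: not minikube's real implementation) of the
	// paused check that the stderr above shows failing on a crio node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// listKubeSystemContainers mirrors the crictl call from the log:
	//   crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	func listKubeSystemContainers() ([]byte, error) {
		return exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
	}

	// listPausedViaRunc mirrors `sudo runc list -f json`, the step that fails with
	// "open /run/runc: no such file or directory" when the runtime is not keeping
	// state in runc's default directory.
	func listPausedViaRunc() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		if out, err := listKubeSystemContainers(); err != nil {
			fmt.Printf("crictl failed: %v\n%s\n", err, out)
			return
		}
		if out, err := listPausedViaRunc(); err != nil {
			// This is the branch the tests hit: the error is treated as fatal and
			// surfaces as MK_ADDON_DISABLE_PAUSED / exit status 11.
			fmt.Printf("runc list failed: %v\n%s\n", err, out)
		}
	}

Querying container state through crictl instead of runc would avoid the dependency on /run/runc, but whether that matches minikube's intended pause semantics is outside what this log shows.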

                                                
                                    
TestAddons/parallel/CloudSpanner (5.39s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-t87sw" [c5db17e9-740d-4ed7-9667-57ac86d40635] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006694737s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (369.611514ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:32:46.097878  307320 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:32:46.098724  307320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:46.098764  307320 out.go:374] Setting ErrFile to fd 2...
	I1227 09:32:46.098786  307320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:46.099189  307320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:32:46.099681  307320 mustload.go:66] Loading cluster: addons-716851
	I1227 09:32:46.103497  307320 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:46.103581  307320 addons.go:622] checking whether the cluster is paused
	I1227 09:32:46.103796  307320 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:46.103837  307320 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:32:46.104669  307320 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:32:46.134181  307320 ssh_runner.go:195] Run: systemctl --version
	I1227 09:32:46.134258  307320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:32:46.163575  307320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:32:46.280316  307320 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:32:46.280407  307320 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:32:46.365004  307320 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:32:46.365025  307320 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:32:46.365030  307320 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:32:46.365034  307320 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:32:46.365037  307320 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:32:46.365040  307320 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:32:46.365044  307320 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:32:46.365046  307320 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:32:46.365050  307320 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:32:46.365058  307320 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:32:46.365061  307320 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:32:46.365064  307320 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:32:46.365067  307320 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:32:46.365070  307320 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:32:46.365073  307320 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:32:46.365078  307320 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:32:46.365081  307320 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:32:46.365085  307320 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:32:46.365088  307320 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:32:46.365091  307320 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:32:46.365096  307320 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:32:46.365106  307320 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:32:46.365109  307320 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:32:46.365112  307320 cri.go:96] found id: ""
	I1227 09:32:46.365162  307320 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:32:46.393979  307320 out.go:203] 
	W1227 09:32:46.397974  307320 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:32:46.398003  307320 out.go:285] * 
	* 
	W1227 09:32:46.400186  307320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:32:46.403087  307320 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.39s)
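The CloudSpanner block above also shows the readiness half of these parallel addon tests: before disabling the addon, the harness polls for a pod matching a label selector (here app=cloud-spanner-emulator in the default namespace) to report Running, with a multi-minute timeout. Below is a minimal client-go sketch of that kind of wait loop; it is an illustration under assumptions (kubeconfig at the default location, the same label and namespace as the test), not the helpers_test.go implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (assumption: running outside the cluster).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 2s, for up to 6 minutes, for a Running pod with the test's label.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
					LabelSelector: "app=cloud-spanner-emulator",
				})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait finished, err =", err)
	}

In these runs the wait itself succeeds within a few seconds; the failures come only afterwards, in the `addons disable` step discussed above.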

                                                
                                    
TestAddons/parallel/LocalPath (8.59s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-716851 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-716851 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-716851 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [b256bc01-6c7e-48a7-8f66-e045c5a0b526] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [b256bc01-6c7e-48a7-8f66-e045c5a0b526] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [b256bc01-6c7e-48a7-8f66-e045c5a0b526] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004190583s
addons_test.go:969: (dbg) Run:  kubectl --context addons-716851 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 ssh "cat /opt/local-path-provisioner/pvc-996fb562-ecfc-48a4-90f6-7b63693bb059_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-716851 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-716851 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (369.521156ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:32:45.934233  307293 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:32:45.935146  307293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:45.935195  307293 out.go:374] Setting ErrFile to fd 2...
	I1227 09:32:45.935272  307293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:45.935594  307293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:32:45.935957  307293 mustload.go:66] Loading cluster: addons-716851
	I1227 09:32:45.936481  307293 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:45.936528  307293 addons.go:622] checking whether the cluster is paused
	I1227 09:32:45.936675  307293 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:45.936707  307293 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:32:45.937313  307293 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:32:45.962168  307293 ssh_runner.go:195] Run: systemctl --version
	I1227 09:32:45.962233  307293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:32:45.986854  307293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:32:46.107565  307293 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:32:46.107665  307293 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:32:46.182482  307293 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:32:46.182508  307293 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:32:46.182512  307293 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:32:46.182516  307293 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:32:46.182519  307293 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:32:46.182523  307293 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:32:46.182526  307293 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:32:46.182529  307293 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:32:46.182532  307293 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:32:46.182538  307293 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:32:46.182542  307293 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:32:46.182545  307293 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:32:46.182547  307293 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:32:46.182550  307293 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:32:46.182553  307293 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:32:46.182562  307293 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:32:46.182565  307293 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:32:46.182569  307293 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:32:46.182572  307293 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:32:46.182576  307293 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:32:46.182580  307293 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:32:46.182583  307293 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:32:46.182586  307293 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:32:46.182588  307293 cri.go:96] found id: ""
	I1227 09:32:46.182641  307293 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:32:46.222824  307293 out.go:203] 
	W1227 09:32:46.226168  307293 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:32:46.226202  307293 out.go:285] * 
	* 
	W1227 09:32:46.228295  307293 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:32:46.232320  307293 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.59s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-g8pzg" [f230629c-24b3-4233-a77a-47f16f582bb6] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003196081s
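The wait above polls kube-system for pods matching the label name=nvidia-device-plugin-ds until they report Running, which here takes about 6 seconds. For manual triage outside the harness, a roughly equivalent check (assuming the profile's kubeconfig context is named addons-716851, which this log does not show) would be:

	kubectl --context addons-716851 -n kube-system get pods -l name=nvidia-device-plugin-ds
	kubectl --context addons-716851 -n kube-system wait --for=condition=Ready pod -l name=nvidia-device-plugin-ds --timeout=360s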
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (259.374693ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:32:37.438127  306873 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:32:37.439103  306873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:37.439146  306873 out.go:374] Setting ErrFile to fd 2...
	I1227 09:32:37.439169  306873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:37.439502  306873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:32:37.439878  306873 mustload.go:66] Loading cluster: addons-716851
	I1227 09:32:37.440368  306873 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:37.440417  306873 addons.go:622] checking whether the cluster is paused
	I1227 09:32:37.440577  306873 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:37.440612  306873 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:32:37.441198  306873 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:32:37.466378  306873 ssh_runner.go:195] Run: systemctl --version
	I1227 09:32:37.466438  306873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:32:37.485297  306873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:32:37.583141  306873 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:32:37.583228  306873 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:32:37.617187  306873 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:32:37.617248  306873 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:32:37.617269  306873 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:32:37.617293  306873 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:32:37.617328  306873 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:32:37.617355  306873 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:32:37.617377  306873 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:32:37.617411  306873 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:32:37.617444  306873 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:32:37.617498  306873 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:32:37.617518  306873 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:32:37.617547  306873 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:32:37.617570  306873 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:32:37.617590  306873 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:32:37.617611  306873 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:32:37.617655  306873 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:32:37.617677  306873 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:32:37.617699  306873 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:32:37.617718  306873 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:32:37.617738  306873 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:32:37.617770  306873 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:32:37.617793  306873 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:32:37.617814  306873 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:32:37.617836  306873 cri.go:96] found id: ""
	I1227 09:32:37.617918  306873 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:32:37.634858  306873 out.go:203] 
	W1227 09:32:37.638167  306873 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:32:37.638192  306873 out.go:285] * 
	* 
	W1227 09:32:37.640451  306873 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:32:37.643815  306873 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.26s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-dz5gq" [d0b37b3f-0909-4e18-a407-1cc57bdb1626] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004603176s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-716851 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-716851 addons disable yakd --alsologtostderr -v=1: exit status 11 (274.883091ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:32:31.169176  306775 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:32:31.170095  306775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:31.170154  306775 out.go:374] Setting ErrFile to fd 2...
	I1227 09:32:31.170178  306775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:32:31.170504  306775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:32:31.170877  306775 mustload.go:66] Loading cluster: addons-716851
	I1227 09:32:31.171333  306775 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:31.171402  306775 addons.go:622] checking whether the cluster is paused
	I1227 09:32:31.171554  306775 config.go:182] Loaded profile config "addons-716851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:32:31.171596  306775 host.go:66] Checking if "addons-716851" exists ...
	I1227 09:32:31.172284  306775 cli_runner.go:164] Run: docker container inspect addons-716851 --format={{.State.Status}}
	I1227 09:32:31.192171  306775 ssh_runner.go:195] Run: systemctl --version
	I1227 09:32:31.192236  306775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716851
	I1227 09:32:31.210859  306775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/addons-716851/id_rsa Username:docker}
	I1227 09:32:31.318693  306775 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:32:31.318795  306775 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:32:31.356847  306775 cri.go:96] found id: "3aff66e9fa8d00e43c2c9d3d7eb0c062802042a2b7edbe0caf0cdbd2caea7065"
	I1227 09:32:31.356872  306775 cri.go:96] found id: "dca9a42c7a04662a90726301725b9931917b0bed21e1ceade6b4f4b8d24a8652"
	I1227 09:32:31.356878  306775 cri.go:96] found id: "0d85776b3afdea69cbb5ee8e93b3331310119c29c2a26aeb49f7ef3a1457b628"
	I1227 09:32:31.356882  306775 cri.go:96] found id: "8ec7b4bf4c8346be437b9c1220c7cb1ec5176aefc76da65630e0b6ba275188f3"
	I1227 09:32:31.356885  306775 cri.go:96] found id: "84effb21b12c413614562035d2447b66f011b25960c20cbca25c810cb260273c"
	I1227 09:32:31.356889  306775 cri.go:96] found id: "1981e5d769fb2e5c1c793a9a15f2a36fd726b1f1e88517359f2324bc71d46d2d"
	I1227 09:32:31.356892  306775 cri.go:96] found id: "3fbcb3867c9910181b0946d1270518de128ea2336b1698475425abe29c1a1733"
	I1227 09:32:31.356895  306775 cri.go:96] found id: "94372f90138c67fc182446293347a77343886ca8f959306731a6ed6f3836edb4"
	I1227 09:32:31.356898  306775 cri.go:96] found id: "3f588dfa11bc04525984e451d7264dc06cbc774de2a2a16517174a06ec6339b2"
	I1227 09:32:31.356904  306775 cri.go:96] found id: "0443b7b8b5486aba7b30633fd47c6cd12c05748dfd16462eaa03c8ae8b4f1fc0"
	I1227 09:32:31.356914  306775 cri.go:96] found id: "b01d3b9f8f811cc32b885b63da674212f1172a44ac66ac7d43522be5e957c26e"
	I1227 09:32:31.356917  306775 cri.go:96] found id: "72205a133899ba5a23ed5afcc247fce01e3404c4ac1dc21ca9e643bc987d149e"
	I1227 09:32:31.356921  306775 cri.go:96] found id: "2ddd8564488ea88f96c3379c52b0d283f6c98067f440ff0b01391ec9825a2a3e"
	I1227 09:32:31.356924  306775 cri.go:96] found id: "60b83f290916a53e3827a5162fc5fadf7d1fc6dc013034d31ce986c099fadd5a"
	I1227 09:32:31.356928  306775 cri.go:96] found id: "b5818e80129c5af128761d7accd8aaf32ec81eecb423401d2dfde8b164fb1c1f"
	I1227 09:32:31.356933  306775 cri.go:96] found id: "dbd8b56c3e8ba3b2e5e05aa48d306bc5a158831d2ca3faf4beea8ae7acad7986"
	I1227 09:32:31.356936  306775 cri.go:96] found id: "aafb6e810f661e352a5b467c68d705cee23b6942cf19b43c16ec9f1184bf7d1c"
	I1227 09:32:31.356940  306775 cri.go:96] found id: "3f46f018f577b1d6dfe39bd9808daf5abed039ef57d4bf71d7d723afe4f67a7c"
	I1227 09:32:31.356943  306775 cri.go:96] found id: "b9bc8aa42a37bd90bf8b4e102f5493ed4231e990e7782654b1a8e90af1373c2e"
	I1227 09:32:31.356946  306775 cri.go:96] found id: "9c3c355e3c9b11b1a1dcedd1665a4477744849590615afd8b3568209fe411204"
	I1227 09:32:31.356951  306775 cri.go:96] found id: "321b58e58fceeb7bcdb049dbeca6a09c67597eba5f2d8a0eb5fe4d604a17fd81"
	I1227 09:32:31.356958  306775 cri.go:96] found id: "b431d42c9b706def6be520e8bed8fe59f5c900644a03b423b0ae0e1c3c99c69f"
	I1227 09:32:31.356961  306775 cri.go:96] found id: "b49246ec3babd011e8b27fb7315532bd9597de705c9410f7e75fb0fdabdb769c"
	I1227 09:32:31.356965  306775 cri.go:96] found id: ""
	I1227 09:32:31.357019  306775 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 09:32:31.372123  306775 out.go:203] 
	W1227 09:32:31.375117  306775 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:32:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 09:32:31.375144  306775 out.go:285] * 
	* 
	W1227 09:32:31.377179  306775 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:32:31.380203  306775 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-716851 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)
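The LocalPath, NvidiaDevicePlugin and Yakd failures above all exit with MK_ADDON_DISABLE_PAUSED from the same paused-state check: the CRI-level listing through crictl succeeds, but the follow-up "sudo runc list -f json" fails because /run/runc does not exist on the node. A minimal way to reproduce that check by hand, reusing the commands shown in the logs (the "minikube ssh" wrapper below is an illustrative stand-in for the harness's ssh_runner calls, not what the test itself runs):

	# succeeds: the CRI-level listing used by cri.go
	out/minikube-linux-arm64 -p addons-716851 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# fails with "open /run/runc: no such file or directory", which is what triggers MK_ADDON_DISABLE_PAUSED
	out/minikube-linux-arm64 -p addons-716851 ssh -- sudo runc list -f json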

                                                
                                    
x
+
TestForceSystemdFlag (506.33s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1227 10:21:42.760247  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:22:15.339374  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m22.10982217s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-915850] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-915850" primary control-plane node in "force-systemd-flag-915850" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:20:31.930680  478121 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:20:31.930791  478121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:20:31.930802  478121 out.go:374] Setting ErrFile to fd 2...
	I1227 10:20:31.930808  478121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:20:31.931055  478121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:20:31.931474  478121 out.go:368] Setting JSON to false
	I1227 10:20:31.932343  478121 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7385,"bootTime":1766823447,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:20:31.932412  478121 start.go:143] virtualization:  
	I1227 10:20:31.936368  478121 out.go:179] * [force-systemd-flag-915850] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:20:31.940642  478121 notify.go:221] Checking for updates...
	I1227 10:20:31.944143  478121 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:20:31.947434  478121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:20:31.950708  478121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:20:31.953969  478121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:20:31.957084  478121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:20:31.960150  478121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:20:31.963591  478121 config.go:182] Loaded profile config "force-systemd-env-193016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:20:31.963715  478121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:20:31.994124  478121 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:20:31.994268  478121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:20:32.052863  478121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:20:32.042998902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:20:32.052974  478121 docker.go:319] overlay module found
	I1227 10:20:32.056170  478121 out.go:179] * Using the docker driver based on user configuration
	I1227 10:20:32.058993  478121 start.go:309] selected driver: docker
	I1227 10:20:32.059011  478121 start.go:928] validating driver "docker" against <nil>
	I1227 10:20:32.059026  478121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:20:32.059808  478121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:20:32.122957  478121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:20:32.113247523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:20:32.123179  478121 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:20:32.123450  478121 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 10:20:32.126448  478121 out.go:179] * Using Docker driver with root privileges
	I1227 10:20:32.129481  478121 cni.go:84] Creating CNI manager for ""
	I1227 10:20:32.129555  478121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:20:32.129572  478121 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:20:32.129657  478121 start.go:353] cluster config:
	{Name:force-systemd-flag-915850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:20:32.132782  478121 out.go:179] * Starting "force-systemd-flag-915850" primary control-plane node in "force-systemd-flag-915850" cluster
	I1227 10:20:32.135655  478121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:20:32.138548  478121 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:20:32.141387  478121 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:20:32.141443  478121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:20:32.141454  478121 cache.go:65] Caching tarball of preloaded images
	I1227 10:20:32.141483  478121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:20:32.141545  478121 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:20:32.141556  478121 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:20:32.141670  478121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/config.json ...
	I1227 10:20:32.141687  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/config.json: {Name:mkd19636fe146d268a0d96b5322f2c1789c1ceab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:32.166287  478121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:20:32.166316  478121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:20:32.166332  478121 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:20:32.166365  478121 start.go:360] acquireMachinesLock for force-systemd-flag-915850: {Name:mk78a9e4e2c08cc91e948e8e89883b32b257e41b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:20:32.166510  478121 start.go:364] duration metric: took 123.489µs to acquireMachinesLock for "force-systemd-flag-915850"
	I1227 10:20:32.166544  478121 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-915850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:20:32.166616  478121 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:20:32.170129  478121 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:20:32.170385  478121 start.go:159] libmachine.API.Create for "force-systemd-flag-915850" (driver="docker")
	I1227 10:20:32.170424  478121 client.go:173] LocalClient.Create starting
	I1227 10:20:32.170498  478121 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:20:32.170543  478121 main.go:144] libmachine: Decoding PEM data...
	I1227 10:20:32.170564  478121 main.go:144] libmachine: Parsing certificate...
	I1227 10:20:32.170622  478121 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:20:32.170656  478121 main.go:144] libmachine: Decoding PEM data...
	I1227 10:20:32.170667  478121 main.go:144] libmachine: Parsing certificate...
	I1227 10:20:32.171065  478121 cli_runner.go:164] Run: docker network inspect force-systemd-flag-915850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:20:32.188748  478121 cli_runner.go:211] docker network inspect force-systemd-flag-915850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:20:32.188832  478121 network_create.go:284] running [docker network inspect force-systemd-flag-915850] to gather additional debugging logs...
	I1227 10:20:32.188857  478121 cli_runner.go:164] Run: docker network inspect force-systemd-flag-915850
	W1227 10:20:32.204458  478121 cli_runner.go:211] docker network inspect force-systemd-flag-915850 returned with exit code 1
	I1227 10:20:32.204489  478121 network_create.go:287] error running [docker network inspect force-systemd-flag-915850]: docker network inspect force-systemd-flag-915850: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-915850 not found
	I1227 10:20:32.204503  478121 network_create.go:289] output of [docker network inspect force-systemd-flag-915850]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-915850 not found
	
	** /stderr **
	I1227 10:20:32.204632  478121 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:20:32.221766  478121 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:20:32.222212  478121 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:20:32.222527  478121 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:20:32.222904  478121 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9e1e2556e14b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:a6:7b:e1:e3:10} reservation:<nil>}
	I1227 10:20:32.223395  478121 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2ba30}
	I1227 10:20:32.223418  478121 network_create.go:124] attempt to create docker network force-systemd-flag-915850 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:20:32.223480  478121 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-915850 force-systemd-flag-915850
	I1227 10:20:32.284343  478121 network_create.go:108] docker network force-systemd-flag-915850 192.168.85.0/24 created
	I1227 10:20:32.284388  478121 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-915850" container
	I1227 10:20:32.284464  478121 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:20:32.300734  478121 cli_runner.go:164] Run: docker volume create force-systemd-flag-915850 --label name.minikube.sigs.k8s.io=force-systemd-flag-915850 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:20:32.318657  478121 oci.go:103] Successfully created a docker volume force-systemd-flag-915850
	I1227 10:20:32.318742  478121 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-915850-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-915850 --entrypoint /usr/bin/test -v force-systemd-flag-915850:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:20:32.875591  478121 oci.go:107] Successfully prepared a docker volume force-systemd-flag-915850
	I1227 10:20:32.875665  478121 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:20:32.875677  478121 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:20:32.875757  478121 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-915850:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:20:36.766042  478121 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-915850:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.890244248s)
	I1227 10:20:36.766074  478121 kic.go:203] duration metric: took 3.890393649s to extract preloaded images to volume ...
	W1227 10:20:36.766221  478121 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:20:36.766356  478121 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:20:36.822869  478121 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-915850 --name force-systemd-flag-915850 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-915850 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-915850 --network force-systemd-flag-915850 --ip 192.168.85.2 --volume force-systemd-flag-915850:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:20:37.154443  478121 cli_runner.go:164] Run: docker container inspect force-systemd-flag-915850 --format={{.State.Running}}
	I1227 10:20:37.182402  478121 cli_runner.go:164] Run: docker container inspect force-systemd-flag-915850 --format={{.State.Status}}
	I1227 10:20:37.206113  478121 cli_runner.go:164] Run: docker exec force-systemd-flag-915850 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:20:37.263728  478121 oci.go:144] the created container "force-systemd-flag-915850" has a running status.
	I1227 10:20:37.263757  478121 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa...
	I1227 10:20:37.463439  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 10:20:37.463490  478121 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:20:37.493943  478121 cli_runner.go:164] Run: docker container inspect force-systemd-flag-915850 --format={{.State.Status}}
	I1227 10:20:37.529134  478121 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:20:37.529154  478121 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-915850 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:20:37.586308  478121 cli_runner.go:164] Run: docker container inspect force-systemd-flag-915850 --format={{.State.Status}}
	I1227 10:20:37.613588  478121 machine.go:94] provisionDockerMachine start ...
	I1227 10:20:37.613693  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:37.641839  478121 main.go:144] libmachine: Using SSH client type: native
	I1227 10:20:37.642931  478121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 10:20:37.642959  478121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:20:37.644177  478121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:20:40.788587  478121 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-915850
	
	I1227 10:20:40.788612  478121 ubuntu.go:182] provisioning hostname "force-systemd-flag-915850"
	I1227 10:20:40.788680  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:40.806854  478121 main.go:144] libmachine: Using SSH client type: native
	I1227 10:20:40.807184  478121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 10:20:40.807202  478121 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-915850 && echo "force-systemd-flag-915850" | sudo tee /etc/hostname
	I1227 10:20:40.961754  478121 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-915850
	
	I1227 10:20:40.961837  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:40.982618  478121 main.go:144] libmachine: Using SSH client type: native
	I1227 10:20:40.982938  478121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 10:20:40.982961  478121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-915850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-915850/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-915850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:20:41.120069  478121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:20:41.120098  478121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:20:41.120125  478121 ubuntu.go:190] setting up certificates
	I1227 10:20:41.120134  478121 provision.go:84] configureAuth start
	I1227 10:20:41.120196  478121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-915850
	I1227 10:20:41.137712  478121 provision.go:143] copyHostCerts
	I1227 10:20:41.137752  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:20:41.137784  478121 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:20:41.137800  478121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:20:41.137879  478121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:20:41.137966  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:20:41.137988  478121 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:20:41.137993  478121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:20:41.138026  478121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:20:41.138072  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:20:41.138091  478121 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:20:41.138099  478121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:20:41.138125  478121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:20:41.138182  478121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-915850 san=[127.0.0.1 192.168.85.2 force-systemd-flag-915850 localhost minikube]
	I1227 10:20:41.518101  478121 provision.go:177] copyRemoteCerts
	I1227 10:20:41.518175  478121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:20:41.518227  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:41.539095  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:41.639995  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 10:20:41.640057  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:20:41.658000  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 10:20:41.658067  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 10:20:41.676069  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 10:20:41.676148  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:20:41.694282  478121 provision.go:87] duration metric: took 574.131042ms to configureAuth
	I1227 10:20:41.694308  478121 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:20:41.694495  478121 config.go:182] Loaded profile config "force-systemd-flag-915850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:20:41.694611  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:41.711990  478121 main.go:144] libmachine: Using SSH client type: native
	I1227 10:20:41.712302  478121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 10:20:41.712319  478121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:20:41.996093  478121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:20:41.996120  478121 machine.go:97] duration metric: took 4.38249464s to provisionDockerMachine
	I1227 10:20:41.996132  478121 client.go:176] duration metric: took 9.825695738s to LocalClient.Create
	I1227 10:20:41.996175  478121 start.go:167] duration metric: took 9.825791689s to libmachine.API.Create "force-systemd-flag-915850"
	I1227 10:20:41.996196  478121 start.go:293] postStartSetup for "force-systemd-flag-915850" (driver="docker")
	I1227 10:20:41.996207  478121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:20:41.996319  478121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:20:41.996389  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:42.018453  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:42.122612  478121 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:20:42.126834  478121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:20:42.126865  478121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:20:42.126879  478121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:20:42.126941  478121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:20:42.127027  478121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:20:42.127034  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 10:20:42.127146  478121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:20:42.136790  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:20:42.160765  478121 start.go:296] duration metric: took 164.552396ms for postStartSetup
	I1227 10:20:42.161206  478121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-915850
	I1227 10:20:42.181132  478121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/config.json ...
	I1227 10:20:42.181481  478121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:20:42.181552  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:42.203830  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:42.305366  478121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:20:42.310662  478121 start.go:128] duration metric: took 10.144029225s to createHost
	I1227 10:20:42.310689  478121 start.go:83] releasing machines lock for "force-systemd-flag-915850", held for 10.144162675s
	I1227 10:20:42.310786  478121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-915850
	I1227 10:20:42.328350  478121 ssh_runner.go:195] Run: cat /version.json
	I1227 10:20:42.328404  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:42.328411  478121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:20:42.328483  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:42.346992  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:42.361586  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:42.543831  478121 ssh_runner.go:195] Run: systemctl --version
	I1227 10:20:42.550509  478121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:20:42.586366  478121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:20:42.591753  478121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:20:42.591850  478121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:20:42.619730  478121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:20:42.619765  478121 start.go:496] detecting cgroup driver to use...
	I1227 10:20:42.619780  478121 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 10:20:42.619846  478121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:20:42.637649  478121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:20:42.650429  478121 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:20:42.650516  478121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:20:42.668294  478121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:20:42.687085  478121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:20:42.796482  478121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:20:42.928169  478121 docker.go:234] disabling docker service ...
	I1227 10:20:42.928302  478121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:20:42.950479  478121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:20:42.968582  478121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:20:43.105389  478121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:20:43.226117  478121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:20:43.240585  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:20:43.254946  478121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:20:43.255057  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.264340  478121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 10:20:43.264464  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.273984  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.282800  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.292034  478121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:20:43.300658  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.309373  478121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.323655  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.332964  478121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:20:43.341203  478121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:20:43.348813  478121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:20:43.466821  478121 ssh_runner.go:195] Run: sudo systemctl restart crio
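The sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, set cgroup_manager = "systemd", and add conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A quick way to confirm the drop-in ended up as intended, assuming shell access to the node (illustrative, not from this run):

	# settings rewritten by the sed commands above
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# effective values as cri-o itself reports them
	sudo crio config | grep -E 'pause_image|cgroup_manager'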
	I1227 10:20:43.632102  478121 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:20:43.632238  478121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:20:43.636120  478121 start.go:574] Will wait 60s for crictl version
	I1227 10:20:43.636220  478121 ssh_runner.go:195] Run: which crictl
	I1227 10:20:43.639739  478121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:20:43.668484  478121 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:20:43.668595  478121 ssh_runner.go:195] Run: crio --version
	I1227 10:20:43.699006  478121 ssh_runner.go:195] Run: crio --version
	I1227 10:20:43.746306  478121 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:20:43.749234  478121 cli_runner.go:164] Run: docker network inspect force-systemd-flag-915850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:20:43.767455  478121 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:20:43.774854  478121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:20:43.785756  478121 kubeadm.go:884] updating cluster {Name:force-systemd-flag-915850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:20:43.785878  478121 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:20:43.785944  478121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:20:43.824116  478121 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:20:43.824141  478121 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:20:43.824203  478121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:20:43.855122  478121 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:20:43.855147  478121 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:20:43.855155  478121 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 10:20:43.855246  478121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-915850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
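The fragment above is the kubelet systemd drop-in that minikube renders; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. Once installed, the merged unit can be reviewed on the node with, for example:

	# show kubelet.service together with all drop-ins, including 10-kubeadm.conf
	systemctl cat kubelet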
	I1227 10:20:43.855334  478121 ssh_runner.go:195] Run: crio config
	I1227 10:20:43.913068  478121 cni.go:84] Creating CNI manager for ""
	I1227 10:20:43.913159  478121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:20:43.913205  478121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:20:43.913270  478121 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-915850 NodeName:force-systemd-flag-915850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:20:43.913565  478121 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-915850"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
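Because this test forces the systemd cgroup driver, the key invariant in the configuration above is that the KubeletConfiguration's cgroupDriver matches cri-o's cgroup_manager. A minimal cross-check on the node, assuming kubeadm has already written /var/lib/kubelet/config.yaml (illustrative, not part of the captured run):

	# kubelet side, rendered by kubeadm from the config above
	grep cgroupDriver /var/lib/kubelet/config.yaml
	# cri-o side, set earlier via sed
	grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf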
	
	I1227 10:20:43.913689  478121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:20:43.921731  478121 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:20:43.921863  478121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:20:43.929963  478121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1227 10:20:43.943925  478121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:20:43.956945  478121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1227 10:20:43.970469  478121 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:20:43.974056  478121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:20:43.983380  478121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:20:44.102811  478121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:20:44.118920  478121 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850 for IP: 192.168.85.2
	I1227 10:20:44.118941  478121 certs.go:195] generating shared ca certs ...
	I1227 10:20:44.118958  478121 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.119112  478121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:20:44.119176  478121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:20:44.119191  478121 certs.go:257] generating profile certs ...
	I1227 10:20:44.119249  478121 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.key
	I1227 10:20:44.119276  478121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.crt with IP's: []
	I1227 10:20:44.403414  478121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.crt ...
	I1227 10:20:44.403449  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.crt: {Name:mkec717e6e011496cd9c1f8bc74cfe8adde984bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.403657  478121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.key ...
	I1227 10:20:44.403674  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.key: {Name:mk65632f861bdd44283621ad64eec0c5ca7b8982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.403769  478121 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key.428f7d60
	I1227 10:20:44.403787  478121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt.428f7d60 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 10:20:44.654009  478121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt.428f7d60 ...
	I1227 10:20:44.654044  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt.428f7d60: {Name:mk6a04d5e0c1ff33311fb8abd695fc81863946b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.654256  478121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key.428f7d60 ...
	I1227 10:20:44.654271  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key.428f7d60: {Name:mke46b3df30588eb7b09514f090fda54e4c47e7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.654365  478121 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt.428f7d60 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt
	I1227 10:20:44.654449  478121 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key.428f7d60 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key
	I1227 10:20:44.654509  478121 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key
	I1227 10:20:44.654526  478121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt with IP's: []
	I1227 10:20:45.127936  478121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt ...
	I1227 10:20:45.128102  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt: {Name:mkf1c5cd040e978426be0be9636d11e865d6dd92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:45.128349  478121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key ...
	I1227 10:20:45.128921  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key: {Name:mkeb178a54735cd4a541c425df0e3bfebf6e0c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:45.129107  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 10:20:45.129130  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 10:20:45.129145  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 10:20:45.129158  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 10:20:45.129170  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 10:20:45.129185  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 10:20:45.129203  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 10:20:45.129216  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 10:20:45.129290  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:20:45.129336  478121 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:20:45.129346  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:20:45.129375  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:20:45.129400  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:20:45.129423  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:20:45.129476  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:20:45.129509  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.129522  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.129533  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.130133  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:20:45.160573  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:20:45.184674  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:20:45.215598  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:20:45.245786  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 10:20:45.279677  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:20:45.317979  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:20:45.343573  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:20:45.367437  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:20:45.387119  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:20:45.409089  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:20:45.427310  478121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:20:45.441115  478121 ssh_runner.go:195] Run: openssl version
	I1227 10:20:45.447857  478121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.455304  478121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:20:45.462841  478121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.466717  478121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.466791  478121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.508030  478121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:20:45.515653  478121 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:20:45.523523  478121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.531439  478121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:20:45.539156  478121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.543062  478121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.543164  478121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.589306  478121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:20:45.596919  478121 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/299811.pem /etc/ssl/certs/51391683.0
	I1227 10:20:45.604643  478121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.612540  478121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:20:45.620269  478121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.624214  478121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.624293  478121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.666121  478121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:20:45.673885  478121 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2998112.pem /etc/ssl/certs/3ec20f2e.0
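The sequence above follows the standard OpenSSL CA-directory convention: each certificate is symlinked under its subject hash with a .0 suffix (here 2998112.pem -> 3ec20f2e.0) so that hash-based lookups can find it. The hash that names the link comes from a command like:

	# prints the subject hash used as the symlink name, e.g. 3ec20f2e
	openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem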
	I1227 10:20:45.681643  478121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:20:45.685563  478121 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:20:45.685615  478121 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-915850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:20:45.685700  478121 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:20:45.685766  478121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:20:45.719008  478121 cri.go:96] found id: ""
	I1227 10:20:45.719096  478121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:20:45.729330  478121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:20:45.738669  478121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:20:45.738748  478121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:20:45.749599  478121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:20:45.749617  478121 kubeadm.go:158] found existing configuration files:
	
	I1227 10:20:45.749677  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:20:45.759058  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:20:45.759133  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:20:45.767606  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:20:45.779697  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:20:45.779765  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:20:45.789383  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:20:45.797519  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:20:45.797615  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:20:45.805482  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:20:45.813697  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:20:45.813798  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:20:45.821908  478121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:20:45.934772  478121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:20:45.935200  478121 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:20:46.023590  478121 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:24:50.682979  478121 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:24:50.683010  478121 kubeadm.go:319] 
	I1227 10:24:50.683083  478121 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:24:50.687987  478121 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:24:50.688051  478121 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:24:50.688148  478121 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:24:50.688217  478121 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:24:50.688263  478121 kubeadm.go:319] OS: Linux
	I1227 10:24:50.688317  478121 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:24:50.688373  478121 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:24:50.688427  478121 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:24:50.688482  478121 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:24:50.688538  478121 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:24:50.688595  478121 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:24:50.688648  478121 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:24:50.688704  478121 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:24:50.688758  478121 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:24:50.688839  478121 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:24:50.688944  478121 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:24:50.689040  478121 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:24:50.689110  478121 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:24:50.691953  478121 out.go:252]   - Generating certificates and keys ...
	I1227 10:24:50.692078  478121 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:24:50.692148  478121 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:24:50.692223  478121 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:24:50.692287  478121 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:24:50.692352  478121 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:24:50.692406  478121 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:24:50.692463  478121 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:24:50.692597  478121 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:24:50.692654  478121 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:24:50.692783  478121 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:24:50.692852  478121 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:24:50.692919  478121 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:24:50.692967  478121 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:24:50.693026  478121 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:24:50.693082  478121 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:24:50.693142  478121 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:24:50.693198  478121 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:24:50.693265  478121 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:24:50.693323  478121 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:24:50.693407  478121 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:24:50.693475  478121 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:24:50.696647  478121 out.go:252]   - Booting up control plane ...
	I1227 10:24:50.696778  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:24:50.696893  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:24:50.696978  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:24:50.697112  478121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:24:50.697242  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:24:50.697361  478121 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:24:50.697455  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:24:50.697499  478121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:24:50.697634  478121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:24:50.697747  478121 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:24:50.697815  478121 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001046036s
	I1227 10:24:50.697823  478121 kubeadm.go:319] 
	I1227 10:24:50.697879  478121 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:24:50.697918  478121 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:24:50.698026  478121 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:24:50.698034  478121 kubeadm.go:319] 
	I1227 10:24:50.698138  478121 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:24:50.698174  478121 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:24:50.698209  478121 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1227 10:24:50.698340  478121 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001046036s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001046036s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 10:24:50.698433  478121 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1227 10:24:50.698709  478121 kubeadm.go:319] 
	I1227 10:24:51.167170  478121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:24:51.185518  478121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:24:51.185583  478121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:24:51.197163  478121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:24:51.197188  478121 kubeadm.go:158] found existing configuration files:
	
	I1227 10:24:51.197242  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:24:51.207119  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:24:51.207184  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:24:51.215621  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:24:51.225127  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:24:51.225192  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:24:51.235213  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:24:51.245042  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:24:51.245107  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:24:51.253351  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:24:51.262229  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:24:51.262288  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:24:51.270965  478121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:24:51.466154  478121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:24:51.466578  478121 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:24:51.549199  478121 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:28:53.376207  478121 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:28:53.376241  478121 kubeadm.go:319] 
	I1227 10:28:53.376363  478121 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:28:53.380700  478121 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:28:53.380772  478121 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:28:53.380862  478121 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:28:53.380917  478121 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:28:53.380951  478121 kubeadm.go:319] OS: Linux
	I1227 10:28:53.380997  478121 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:28:53.381045  478121 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:28:53.381092  478121 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:28:53.381141  478121 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:28:53.381188  478121 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:28:53.381237  478121 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:28:53.381282  478121 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:28:53.381330  478121 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:28:53.381376  478121 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:28:53.381448  478121 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:28:53.381543  478121 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:28:53.381633  478121 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:28:53.381695  478121 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:28:53.384707  478121 out.go:252]   - Generating certificates and keys ...
	I1227 10:28:53.384801  478121 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:28:53.384866  478121 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:28:53.384975  478121 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:28:53.385036  478121 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:28:53.385105  478121 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:28:53.385158  478121 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:28:53.385260  478121 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:28:53.385323  478121 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:28:53.385396  478121 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:28:53.385489  478121 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:28:53.385528  478121 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:28:53.385583  478121 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:28:53.385634  478121 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:28:53.385690  478121 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:28:53.385742  478121 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:28:53.385805  478121 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:28:53.385859  478121 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:28:53.385942  478121 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:28:53.386007  478121 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:28:53.389106  478121 out.go:252]   - Booting up control plane ...
	I1227 10:28:53.389286  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:28:53.389416  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:28:53.389492  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:28:53.389605  478121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:28:53.389707  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:28:53.389818  478121 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:28:53.389915  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:28:53.389957  478121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:28:53.390097  478121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:28:53.390208  478121 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:28:53.390277  478121 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000432104s
	I1227 10:28:53.390281  478121 kubeadm.go:319] 
	I1227 10:28:53.390344  478121 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:28:53.390378  478121 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:28:53.390493  478121 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:28:53.390498  478121 kubeadm.go:319] 
	I1227 10:28:53.390609  478121 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:28:53.390643  478121 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:28:53.390676  478121 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:28:53.390739  478121 kubeadm.go:403] duration metric: took 8m7.705127798s to StartCluster
	I1227 10:28:53.390772  478121 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:28:53.390832  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:28:53.390929  478121 kubeadm.go:319] 
	I1227 10:28:53.423749  478121 cri.go:96] found id: ""
	I1227 10:28:53.423822  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.423845  478121 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:28:53.423871  478121 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:28:53.423989  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:28:53.462817  478121 cri.go:96] found id: ""
	I1227 10:28:53.462904  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.462936  478121 logs.go:284] No container was found matching "etcd"
	I1227 10:28:53.462962  478121 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:28:53.463057  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:28:53.494248  478121 cri.go:96] found id: ""
	I1227 10:28:53.494318  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.494342  478121 logs.go:284] No container was found matching "coredns"
	I1227 10:28:53.494366  478121 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:28:53.494455  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:28:53.530232  478121 cri.go:96] found id: ""
	I1227 10:28:53.530311  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.530336  478121 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:28:53.530373  478121 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:28:53.530509  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:28:53.557685  478121 cri.go:96] found id: ""
	I1227 10:28:53.557754  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.557778  478121 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:28:53.557802  478121 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:28:53.557887  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:28:53.592134  478121 cri.go:96] found id: ""
	I1227 10:28:53.592213  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.592236  478121 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:28:53.592277  478121 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:28:53.592367  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:28:53.626986  478121 cri.go:96] found id: ""
	I1227 10:28:53.627055  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.627078  478121 logs.go:284] No container was found matching "kindnet"
	I1227 10:28:53.627107  478121 logs.go:123] Gathering logs for kubelet ...
	I1227 10:28:53.627146  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:28:53.704973  478121 logs.go:123] Gathering logs for dmesg ...
	I1227 10:28:53.705012  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:28:53.723897  478121 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:28:53.723928  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:28:53.878167  478121 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:28:53.868148    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.869052    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.871356    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.871731    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.873468    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:28:53.868148    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.869052    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.871356    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.871731    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.873468    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 10:28:53.878212  478121 logs.go:123] Gathering logs for CRI-O ...
	I1227 10:28:53.878224  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 10:28:53.920808  478121 logs.go:123] Gathering logs for container status ...
	I1227 10:28:53.920845  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 10:28:53.961041  478121 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000432104s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:28:53.961118  478121 out.go:285] * 
	* 
	W1227 10:28:53.961286  478121 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000432104s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000432104s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:28:53.961308  478121 out.go:285] * 
	* 
	W1227 10:28:53.961787  478121 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:28:53.968848  478121 out.go:203] 
	W1227 10:28:53.971840  478121 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000432104s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000432104s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:28:53.971893  478121 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:28:53.971916  478121 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:28:53.975167  478121 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
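The failure above is minikube's K8S_KUBELET_NOT_RUNNING exit: kubeadm wrote the control-plane manifests, but the kubelet never answered its http://127.0.0.1:10248/healthz probe within the 4m0s window, so the start aborts with exit status 109. The log's own suggestion is to align the kubelet cgroup driver with systemd. A minimal sketch of that suggestion, reusing the profile and runtime flags from this run (whether these flags actually clear the cgroup v1 warning is not confirmed by this report):

	out/minikube-linux-arm64 start -p force-systemd-flag-915850 \
	  --driver=docker --container-runtime=crio --force-systemd \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still never turns healthy, check it from inside the node:
	out/minikube-linux-arm64 -p force-systemd-flag-915850 ssh -- \
	  "sudo systemctl status kubelet --no-pager; curl -sS http://127.0.0.1:10248/healthz; sudo journalctl -u kubelet -n 50 --no-pager"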
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-915850 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 10:28:54.446777216 +0000 UTC m=+3557.819696573
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-915850
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-915850:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ea231d4ce27b73ed3f576754e3f3e4b4423fbd70760138f07ab89ed6c288d724",
	        "Created": "2025-12-27T10:20:36.838249907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478579,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:20:36.910415085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/ea231d4ce27b73ed3f576754e3f3e4b4423fbd70760138f07ab89ed6c288d724/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea231d4ce27b73ed3f576754e3f3e4b4423fbd70760138f07ab89ed6c288d724/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea231d4ce27b73ed3f576754e3f3e4b4423fbd70760138f07ab89ed6c288d724/hosts",
	        "LogPath": "/var/lib/docker/containers/ea231d4ce27b73ed3f576754e3f3e4b4423fbd70760138f07ab89ed6c288d724/ea231d4ce27b73ed3f576754e3f3e4b4423fbd70760138f07ab89ed6c288d724-json.log",
	        "Name": "/force-systemd-flag-915850",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-915850:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-915850",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ea231d4ce27b73ed3f576754e3f3e4b4423fbd70760138f07ab89ed6c288d724",
	                "LowerDir": "/var/lib/docker/overlay2/1911a3eda7ec4642618e4b625775413eb569a9db82cc2870861e9c64e1f41dd6-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1911a3eda7ec4642618e4b625775413eb569a9db82cc2870861e9c64e1f41dd6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1911a3eda7ec4642618e4b625775413eb569a9db82cc2870861e9c64e1f41dd6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1911a3eda7ec4642618e4b625775413eb569a9db82cc2870861e9c64e1f41dd6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-915850",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-915850/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-915850",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-915850",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-915850",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "62ed79de607dd304e381c8bf59f18062faeb677c3ac7ba0deffef7cf7a123319",
	            "SandboxKey": "/var/run/docker/netns/62ed79de607d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-915850": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:ef:7d:22:69:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afe075975360eaaa40499c792107f1a4e37c2abe00930e5402926f07f5a68698",
	                    "EndpointID": "4e8025dbd1f35b8cebd78e52baa76567f8c1737b1c31bf82a019e4f0be05383e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-915850",
	                        "ea231d4ce27b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
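(Editorial note, not part of the recorded test output: the inspect output above shows the force-systemd-flag-915850 container running with its standard ports 22, 2376, 5000, 8443 and 32443 published on 127.0.0.1, and an address of 192.168.85.2 on the force-systemd-flag-915850 bridge network. A single mapped port can be read back with the same Go-template query the tooling itself uses later in this log, for example:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-915850

which, while the container exists, should print the SSH host port recorded above, 33398.)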
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-915850 -n force-systemd-flag-915850
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-915850 -n force-systemd-flag-915850: exit status 6 (402.900314ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:28:54.868318  504641 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-915850" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
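(Editorial note, not part of the recorded test output: the exit status 6 comes from the kubeconfig check; the container reports Running, but the profile "force-systemd-flag-915850" has no endpoint entry in /home/jenkins/minikube-integration/22343-297941/kubeconfig, so kubectl still points at a stale context. A minimal sketch of how this is normally confirmed and repaired, using commands not taken from the recorded run:

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22343-297941/kubeconfig
	minikube update-context -p force-systemd-flag-915850

The first command lists the contexts actually present in that kubeconfig; the second is the fix the warning above suggests, applied to this profile.)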
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-915850 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p force-systemd-env-193016                                                                                                                                                                                                                   │ force-systemd-env-193016     │ jenkins │ v1.37.0 │ 27 Dec 25 10:22 UTC │ 27 Dec 25 10:22 UTC │
	│ start   │ -p cert-options-810217 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ cert-options-810217 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ -p cert-options-810217 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ delete  │ -p cert-options-810217                                                                                                                                                                                                                        │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │                     │
	│ stop    │ -p old-k8s-version-482317 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-482317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:25 UTC │
	│ image   │ old-k8s-version-482317 image list --format=json                                                                                                                                                                                               │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │ 27 Dec 25 10:25 UTC │
	│ pause   │ -p old-k8s-version-482317 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │                     │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-784377 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-784377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ image   │ default-k8s-diff-port-784377 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ pause   │ -p default-k8s-diff-port-784377 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	│ ssh     │ force-systemd-flag-915850 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:28:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:28:21.076845  501861 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:28:21.077011  501861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:28:21.077042  501861 out.go:374] Setting ErrFile to fd 2...
	I1227 10:28:21.077063  501861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:28:21.077436  501861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:28:21.077942  501861 out.go:368] Setting JSON to false
	I1227 10:28:21.079050  501861 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7854,"bootTime":1766823447,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:28:21.079123  501861 start.go:143] virtualization:  
	I1227 10:28:21.083343  501861 out.go:179] * [embed-certs-367691] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:28:21.088107  501861 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:28:21.088183  501861 notify.go:221] Checking for updates...
	I1227 10:28:21.094748  501861 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:28:21.097862  501861 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:28:21.100982  501861 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:28:21.104059  501861 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:28:21.107061  501861 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:28:21.110564  501861 config.go:182] Loaded profile config "force-systemd-flag-915850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:28:21.110668  501861 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:28:21.138920  501861 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:28:21.139047  501861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:28:21.198326  501861 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:28:21.1888535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:
/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:28:21.198434  501861 docker.go:319] overlay module found
	I1227 10:28:21.201729  501861 out.go:179] * Using the docker driver based on user configuration
	I1227 10:28:21.204719  501861 start.go:309] selected driver: docker
	I1227 10:28:21.204739  501861 start.go:928] validating driver "docker" against <nil>
	I1227 10:28:21.204753  501861 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:28:21.205492  501861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:28:21.258069  501861 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:28:21.249183702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:28:21.258233  501861 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:28:21.258454  501861 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:28:21.261514  501861 out.go:179] * Using Docker driver with root privileges
	I1227 10:28:21.264495  501861 cni.go:84] Creating CNI manager for ""
	I1227 10:28:21.264559  501861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:28:21.264573  501861 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:28:21.264663  501861 start.go:353] cluster config:
	{Name:embed-certs-367691 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:28:21.267857  501861 out.go:179] * Starting "embed-certs-367691" primary control-plane node in "embed-certs-367691" cluster
	I1227 10:28:21.270601  501861 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:28:21.273573  501861 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:28:21.276405  501861 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:28:21.276454  501861 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:28:21.276466  501861 cache.go:65] Caching tarball of preloaded images
	I1227 10:28:21.276487  501861 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:28:21.276549  501861 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:28:21.276559  501861 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:28:21.276671  501861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/config.json ...
	I1227 10:28:21.276689  501861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/config.json: {Name:mk2d36b76208e8573b57f1e8ecc1600c84df5c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:21.295584  501861 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:28:21.295609  501861 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:28:21.295625  501861 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:28:21.295657  501861 start.go:360] acquireMachinesLock for embed-certs-367691: {Name:mkb83b0668d0dafda9600ffbecce26be02e61e8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:21.295766  501861 start.go:364] duration metric: took 87.041µs to acquireMachinesLock for "embed-certs-367691"
	I1227 10:28:21.295806  501861 start.go:93] Provisioning new machine with config: &{Name:embed-certs-367691 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:28:21.295884  501861 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:28:21.299307  501861 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:28:21.299543  501861 start.go:159] libmachine.API.Create for "embed-certs-367691" (driver="docker")
	I1227 10:28:21.299581  501861 client.go:173] LocalClient.Create starting
	I1227 10:28:21.299666  501861 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:28:21.299712  501861 main.go:144] libmachine: Decoding PEM data...
	I1227 10:28:21.299732  501861 main.go:144] libmachine: Parsing certificate...
	I1227 10:28:21.299791  501861 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:28:21.299813  501861 main.go:144] libmachine: Decoding PEM data...
	I1227 10:28:21.299830  501861 main.go:144] libmachine: Parsing certificate...
	I1227 10:28:21.300218  501861 cli_runner.go:164] Run: docker network inspect embed-certs-367691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:28:21.316048  501861 cli_runner.go:211] docker network inspect embed-certs-367691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:28:21.316133  501861 network_create.go:284] running [docker network inspect embed-certs-367691] to gather additional debugging logs...
	I1227 10:28:21.316156  501861 cli_runner.go:164] Run: docker network inspect embed-certs-367691
	W1227 10:28:21.331649  501861 cli_runner.go:211] docker network inspect embed-certs-367691 returned with exit code 1
	I1227 10:28:21.331690  501861 network_create.go:287] error running [docker network inspect embed-certs-367691]: docker network inspect embed-certs-367691: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-367691 not found
	I1227 10:28:21.331703  501861 network_create.go:289] output of [docker network inspect embed-certs-367691]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-367691 not found
	
	** /stderr **
	I1227 10:28:21.331806  501861 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:28:21.348403  501861 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:28:21.348900  501861 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:28:21.349216  501861 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:28:21.349703  501861 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a54720}
	I1227 10:28:21.349729  501861 network_create.go:124] attempt to create docker network embed-certs-367691 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:28:21.349788  501861 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-367691 embed-certs-367691
	I1227 10:28:21.412882  501861 network_create.go:108] docker network embed-certs-367691 192.168.76.0/24 created
	I1227 10:28:21.412915  501861 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-367691" container
	I1227 10:28:21.412996  501861 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:28:21.429677  501861 cli_runner.go:164] Run: docker volume create embed-certs-367691 --label name.minikube.sigs.k8s.io=embed-certs-367691 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:28:21.447566  501861 oci.go:103] Successfully created a docker volume embed-certs-367691
	I1227 10:28:21.447652  501861 cli_runner.go:164] Run: docker run --rm --name embed-certs-367691-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-367691 --entrypoint /usr/bin/test -v embed-certs-367691:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:28:21.969241  501861 oci.go:107] Successfully prepared a docker volume embed-certs-367691
	I1227 10:28:21.969307  501861 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:28:21.969318  501861 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:28:21.969383  501861 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-367691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:28:25.876044  501861 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-367691:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.90661858s)
	I1227 10:28:25.876088  501861 kic.go:203] duration metric: took 3.906766339s to extract preloaded images to volume ...
	W1227 10:28:25.876227  501861 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:28:25.876347  501861 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:28:25.936278  501861 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-367691 --name embed-certs-367691 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-367691 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-367691 --network embed-certs-367691 --ip 192.168.76.2 --volume embed-certs-367691:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:28:26.250800  501861 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Running}}
	I1227 10:28:26.270772  501861 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:28:26.292146  501861 cli_runner.go:164] Run: docker exec embed-certs-367691 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:28:26.348893  501861 oci.go:144] the created container "embed-certs-367691" has a running status.
	I1227 10:28:26.348921  501861 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa...
	I1227 10:28:26.531633  501861 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:28:26.557241  501861 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:28:26.582358  501861 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:28:26.582378  501861 kic_runner.go:114] Args: [docker exec --privileged embed-certs-367691 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:28:26.647510  501861 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:28:26.669566  501861 machine.go:94] provisionDockerMachine start ...
	I1227 10:28:26.669658  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:26.690247  501861 main.go:144] libmachine: Using SSH client type: native
	I1227 10:28:26.690604  501861 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1227 10:28:26.690614  501861 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:28:26.691458  501861 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:28:29.847896  501861 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-367691
	
	I1227 10:28:29.847923  501861 ubuntu.go:182] provisioning hostname "embed-certs-367691"
	I1227 10:28:29.848009  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:29.865913  501861 main.go:144] libmachine: Using SSH client type: native
	I1227 10:28:29.866239  501861 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1227 10:28:29.866258  501861 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-367691 && echo "embed-certs-367691" | sudo tee /etc/hostname
	I1227 10:28:30.037518  501861 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-367691
	
	I1227 10:28:30.037704  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:30.073278  501861 main.go:144] libmachine: Using SSH client type: native
	I1227 10:28:30.073619  501861 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1227 10:28:30.073637  501861 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-367691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-367691/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-367691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:28:30.224547  501861 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:28:30.224579  501861 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:28:30.224650  501861 ubuntu.go:190] setting up certificates
	I1227 10:28:30.224659  501861 provision.go:84] configureAuth start
	I1227 10:28:30.224733  501861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-367691
	I1227 10:28:30.242882  501861 provision.go:143] copyHostCerts
	I1227 10:28:30.242961  501861 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:28:30.243008  501861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:28:30.243103  501861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:28:30.243245  501861 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:28:30.243256  501861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:28:30.243287  501861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:28:30.243362  501861 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:28:30.243372  501861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:28:30.243400  501861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:28:30.243460  501861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.embed-certs-367691 san=[127.0.0.1 192.168.76.2 embed-certs-367691 localhost minikube]
	I1227 10:28:30.339880  501861 provision.go:177] copyRemoteCerts
	I1227 10:28:30.339951  501861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:28:30.340035  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:30.356622  501861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:28:30.456008  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:28:30.476820  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1227 10:28:30.502967  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:28:30.530560  501861 provision.go:87] duration metric: took 305.885824ms to configureAuth
	I1227 10:28:30.530603  501861 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:28:30.530800  501861 config.go:182] Loaded profile config "embed-certs-367691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:28:30.530931  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:30.548001  501861 main.go:144] libmachine: Using SSH client type: native
	I1227 10:28:30.548316  501861 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1227 10:28:30.548338  501861 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:28:30.847573  501861 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:28:30.847601  501861 machine.go:97] duration metric: took 4.178015053s to provisionDockerMachine
	I1227 10:28:30.847613  501861 client.go:176] duration metric: took 9.548021375s to LocalClient.Create
	I1227 10:28:30.847628  501861 start.go:167] duration metric: took 9.548087157s to libmachine.API.Create "embed-certs-367691"
	I1227 10:28:30.847636  501861 start.go:293] postStartSetup for "embed-certs-367691" (driver="docker")
	I1227 10:28:30.847662  501861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:28:30.847731  501861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:28:30.847779  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:30.865486  501861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:28:30.964416  501861 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:28:30.967831  501861 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:28:30.967860  501861 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:28:30.967873  501861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:28:30.967930  501861 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:28:30.968037  501861 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:28:30.968161  501861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:28:30.975843  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:28:30.993312  501861 start.go:296] duration metric: took 145.660623ms for postStartSetup
	I1227 10:28:30.993689  501861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-367691
	I1227 10:28:31.014125  501861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/config.json ...
	I1227 10:28:31.014439  501861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:28:31.014489  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:31.036069  501861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:28:31.133822  501861 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:28:31.139267  501861 start.go:128] duration metric: took 9.843366571s to createHost
	I1227 10:28:31.139297  501861 start.go:83] releasing machines lock for "embed-certs-367691", held for 9.843513363s
	I1227 10:28:31.139375  501861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-367691
	I1227 10:28:31.163329  501861 ssh_runner.go:195] Run: cat /version.json
	I1227 10:28:31.163374  501861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:28:31.163399  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:31.163434  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:31.183553  501861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:28:31.185550  501861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:28:31.378678  501861 ssh_runner.go:195] Run: systemctl --version
	I1227 10:28:31.385296  501861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:28:31.421098  501861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:28:31.425591  501861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:28:31.425711  501861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:28:31.454304  501861 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:28:31.454330  501861 start.go:496] detecting cgroup driver to use...
	I1227 10:28:31.454408  501861 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:28:31.454495  501861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:28:31.472716  501861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:28:31.485517  501861 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:28:31.485605  501861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:28:31.503347  501861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:28:31.522696  501861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:28:31.650112  501861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:28:31.783474  501861 docker.go:234] disabling docker service ...
	I1227 10:28:31.783598  501861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:28:31.804555  501861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:28:31.818066  501861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:28:31.941501  501861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:28:32.094679  501861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:28:32.107604  501861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:28:32.123208  501861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:28:32.123325  501861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:28:32.132676  501861 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:28:32.132810  501861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:28:32.142580  501861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:28:32.152836  501861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:28:32.162235  501861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:28:32.171683  501861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:28:32.180969  501861 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:28:32.195415  501861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:28:32.204403  501861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:28:32.211542  501861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:28:32.218701  501861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:28:32.324871  501861 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:28:32.495651  501861 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:28:32.495749  501861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:28:32.499613  501861 start.go:574] Will wait 60s for crictl version
	I1227 10:28:32.499725  501861 ssh_runner.go:195] Run: which crictl
	I1227 10:28:32.503631  501861 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:28:32.529281  501861 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:28:32.529394  501861 ssh_runner.go:195] Run: crio --version
	I1227 10:28:32.558123  501861 ssh_runner.go:195] Run: crio --version
	I1227 10:28:32.592748  501861 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:28:32.595652  501861 cli_runner.go:164] Run: docker network inspect embed-certs-367691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:28:32.611894  501861 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:28:32.615924  501861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:28:32.625365  501861 kubeadm.go:884] updating cluster {Name:embed-certs-367691 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:28:32.625484  501861 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:28:32.625549  501861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:28:32.664432  501861 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:28:32.664459  501861 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:28:32.664519  501861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:28:32.688395  501861 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:28:32.688420  501861 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:28:32.688428  501861 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:28:32.688523  501861 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-367691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
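The kubelet flags shown above end up in the 368-byte systemd drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. Writing an equivalent drop-in by hand would look roughly like this (contents reconstructed from the log; the file minikube actually generates may differ in detail):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-367691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

	[Install]
	EOF
	sudo systemctl daemon-reload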
	I1227 10:28:32.688610  501861 ssh_runner.go:195] Run: crio config
	I1227 10:28:32.762798  501861 cni.go:84] Creating CNI manager for ""
	I1227 10:28:32.762878  501861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:28:32.762917  501861 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:28:32.762975  501861 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-367691 NodeName:embed-certs-367691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:28:32.763299  501861 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-367691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:28:32.763743  501861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:28:32.777721  501861 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:28:32.777796  501861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:28:32.789734  501861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 10:28:32.804851  501861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:28:32.818651  501861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
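Everything from the "kubeadm config:" line down to the KubeProxyConfiguration block is what gets written to /var/tmp/minikube/kubeadm.yaml.new here (2235 bytes) and fed to kubeadm init later in the log. One way to sanity-check a config like this by hand, without touching node state, is a dry run with the same bundled binary (a sketch; the test itself does not do this):

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new \
	  --dry-run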
	I1227 10:28:32.832910  501861 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:28:32.837047  501861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:28:32.846448  501861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:28:32.957723  501861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:28:32.975080  501861 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691 for IP: 192.168.76.2
	I1227 10:28:32.975143  501861 certs.go:195] generating shared ca certs ...
	I1227 10:28:32.975176  501861 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:32.975359  501861 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:28:32.975433  501861 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:28:32.975458  501861 certs.go:257] generating profile certs ...
	I1227 10:28:32.975528  501861 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/client.key
	I1227 10:28:32.975564  501861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/client.crt with IP's: []
	I1227 10:28:33.121428  501861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/client.crt ...
	I1227 10:28:33.121463  501861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/client.crt: {Name:mkf9a52d69972b1d437e89d95d107d893d1548cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:33.121659  501861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/client.key ...
	I1227 10:28:33.121671  501861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/client.key: {Name:mk3c40faeb0bf6a9de9f741fd8c929f3a63d7140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:33.121770  501861 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.key.b2a82a80
	I1227 10:28:33.121791  501861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.crt.b2a82a80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:28:33.191898  501861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.crt.b2a82a80 ...
	I1227 10:28:33.191927  501861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.crt.b2a82a80: {Name:mkbf59962567f9b5d4f141a66800a597b99bbec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:33.192092  501861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.key.b2a82a80 ...
	I1227 10:28:33.192106  501861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.key.b2a82a80: {Name:mkbf2524d944aa877a416d7bd7d45bdb1b21f3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:33.192177  501861 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.crt.b2a82a80 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.crt
	I1227 10:28:33.192263  501861 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.key.b2a82a80 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.key
	I1227 10:28:33.192330  501861 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.key
	I1227 10:28:33.192348  501861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.crt with IP's: []
	I1227 10:28:33.241538  501861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.crt ...
	I1227 10:28:33.241568  501861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.crt: {Name:mkdcb5f19c14b00cdd7569f84ad9e98b5afb78b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:33.241745  501861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.key ...
	I1227 10:28:33.241757  501861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.key: {Name:mkede2b4f197a04a9ca25b19a0541d3808e2e50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:33.241971  501861 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:28:33.242020  501861 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:28:33.242033  501861 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:28:33.242063  501861 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:28:33.242121  501861 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:28:33.242149  501861 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:28:33.242201  501861 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:28:33.242778  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:28:33.261724  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:28:33.281741  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:28:33.300836  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:28:33.319198  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 10:28:33.337833  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:28:33.357141  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:28:33.375196  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:28:33.393665  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:28:33.412125  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:28:33.430591  501861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:28:33.448786  501861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:28:33.467361  501861 ssh_runner.go:195] Run: openssl version
	I1227 10:28:33.475041  501861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:28:33.482939  501861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:28:33.490815  501861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:28:33.495397  501861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:28:33.495515  501861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:28:33.541023  501861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:28:33.548681  501861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2998112.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:28:33.556156  501861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:28:33.563400  501861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:28:33.570991  501861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:28:33.574885  501861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:28:33.574970  501861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:28:33.616129  501861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:28:33.624179  501861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:28:33.631678  501861 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:28:33.639172  501861 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:28:33.646995  501861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:28:33.650923  501861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:28:33.650992  501861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:28:33.697463  501861 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:28:33.709566  501861 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/299811.pem /etc/ssl/certs/51391683.0
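The sequence above installs each CA bundle under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0). Recreating one of those links by hand follows the same two commands the log runs (sketch):

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"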
	I1227 10:28:33.725747  501861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:28:33.730814  501861 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:28:33.730930  501861 kubeadm.go:401] StartCluster: {Name:embed-certs-367691 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:28:33.731094  501861 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:28:33.731209  501861 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:28:33.763266  501861 cri.go:96] found id: ""
	I1227 10:28:33.763389  501861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:28:33.775692  501861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:28:33.783664  501861 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:28:33.783760  501861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:28:33.792014  501861 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:28:33.792046  501861 kubeadm.go:158] found existing configuration files:
	
	I1227 10:28:33.792097  501861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:28:33.800197  501861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:28:33.800287  501861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:28:33.807819  501861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:28:33.815838  501861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:28:33.815951  501861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:28:33.823468  501861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:28:33.831160  501861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:28:33.831258  501861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:28:33.838935  501861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:28:33.846828  501861 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:28:33.846947  501861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
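The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted before kubeadm init runs. A compact sketch of the same check (the log runs the commands individually rather than in a loop):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done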
	I1227 10:28:33.854744  501861 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:28:33.964734  501861 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:28:33.965227  501861 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:28:34.039095  501861 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:28:47.783385  501861 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:28:47.783449  501861 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:28:47.783543  501861 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:28:47.783608  501861 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:28:47.783648  501861 kubeadm.go:319] OS: Linux
	I1227 10:28:47.783721  501861 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:28:47.783775  501861 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:28:47.783828  501861 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:28:47.783881  501861 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:28:47.783932  501861 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:28:47.784021  501861 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:28:47.784070  501861 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:28:47.784123  501861 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:28:47.784171  501861 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:28:47.784282  501861 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:28:47.784446  501861 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:28:47.784545  501861 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:28:47.784613  501861 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:28:47.787517  501861 out.go:252]   - Generating certificates and keys ...
	I1227 10:28:47.787612  501861 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:28:47.787681  501861 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:28:47.787752  501861 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:28:47.787818  501861 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:28:47.787882  501861 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:28:47.787936  501861 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:28:47.788015  501861 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:28:47.788142  501861 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-367691 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:28:47.788200  501861 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:28:47.788322  501861 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-367691 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:28:47.788391  501861 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:28:47.788456  501861 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:28:47.788504  501861 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:28:47.788563  501861 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:28:47.788616  501861 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:28:47.788676  501861 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:28:47.788735  501861 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:28:47.788801  501861 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:28:47.788865  501861 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:28:47.788950  501861 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:28:47.789019  501861 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:28:47.792129  501861 out.go:252]   - Booting up control plane ...
	I1227 10:28:47.792323  501861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:28:47.792449  501861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:28:47.792554  501861 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:28:47.792713  501861 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:28:47.792841  501861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:28:47.792969  501861 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:28:47.793059  501861 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:28:47.793106  501861 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:28:47.793259  501861 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:28:47.793394  501861 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:28:47.793465  501861 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000793159s
	I1227 10:28:47.793558  501861 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:28:47.793640  501861 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 10:28:47.793751  501861 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:28:47.793831  501861 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 10:28:47.793905  501861 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.01422746s
	I1227 10:28:47.793972  501861 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.349897125s
	I1227 10:28:47.794038  501861 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.50305987s
	I1227 10:28:47.794144  501861 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:28:47.794273  501861 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:28:47.794339  501861 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:28:47.794523  501861 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-367691 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:28:47.794580  501861 kubeadm.go:319] [bootstrap-token] Using token: lputwk.uhxiogsyrwr0ddl7
	I1227 10:28:47.797716  501861 out.go:252]   - Configuring RBAC rules ...
	I1227 10:28:47.797853  501861 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:28:47.797945  501861 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:28:47.798150  501861 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:28:47.798300  501861 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:28:47.798426  501861 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:28:47.798527  501861 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:28:47.798665  501861 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:28:47.798714  501861 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:28:47.798774  501861 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:28:47.798788  501861 kubeadm.go:319] 
	I1227 10:28:47.798855  501861 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:28:47.798871  501861 kubeadm.go:319] 
	I1227 10:28:47.798960  501861 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:28:47.798977  501861 kubeadm.go:319] 
	I1227 10:28:47.799014  501861 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:28:47.799077  501861 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:28:47.799129  501861 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:28:47.799133  501861 kubeadm.go:319] 
	I1227 10:28:47.799191  501861 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:28:47.799196  501861 kubeadm.go:319] 
	I1227 10:28:47.799243  501861 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:28:47.799247  501861 kubeadm.go:319] 
	I1227 10:28:47.799303  501861 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:28:47.799392  501861 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:28:47.799473  501861 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:28:47.799479  501861 kubeadm.go:319] 
	I1227 10:28:47.799563  501861 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:28:47.799648  501861 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:28:47.799653  501861 kubeadm.go:319] 
	I1227 10:28:47.799762  501861 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lputwk.uhxiogsyrwr0ddl7 \
	I1227 10:28:47.799879  501861 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 \
	I1227 10:28:47.799908  501861 kubeadm.go:319] 	--control-plane 
	I1227 10:28:47.799917  501861 kubeadm.go:319] 
	I1227 10:28:47.800270  501861 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:28:47.800288  501861 kubeadm.go:319] 
	I1227 10:28:47.800390  501861 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lputwk.uhxiogsyrwr0ddl7 \
	I1227 10:28:47.800509  501861 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 
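The join commands kubeadm prints above pin the cluster CA with --discovery-token-ca-cert-hash. If that hash ever needs to be recomputed, the usual openssl recipe from the kubeadm documentation works against minikube's certificate directory (a sketch; the CA path comes from the certificatesDir shown earlier, and the pipeline assumes the RSA CA key minikube generates):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex \
	  | sed 's/^.* //'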
	I1227 10:28:47.800533  501861 cni.go:84] Creating CNI manager for ""
	I1227 10:28:47.800544  501861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:28:47.805376  501861 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:28:47.808353  501861 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:28:47.812721  501861 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 10:28:47.812744  501861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:28:47.825688  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
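With the control plane up, minikube applies its kindnet CNI manifest (/var/tmp/minikube/cni.yaml) using the bundled kubectl and the in-VM kubeconfig. Checking by hand that the CNI workload actually landed uses the same binary and kubeconfig (sketch; the test relies on its own node-readiness wait instead):

	sudo /var/lib/minikube/binaries/v1.35.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonsets,pods -o wide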
	I1227 10:28:48.147694  501861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:28:48.147772  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:28:48.147839  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-367691 minikube.k8s.io/updated_at=2025_12_27T10_28_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=embed-certs-367691 minikube.k8s.io/primary=true
	I1227 10:28:48.330591  501861 ops.go:34] apiserver oom_adj: -16
	I1227 10:28:48.330797  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:28:48.831067  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:28:49.331803  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:28:49.831678  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:28:50.331630  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:28:50.831132  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:28:51.331533  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:28:51.831747  501861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:28:51.933277  501861 kubeadm.go:1114] duration metric: took 3.785568077s to wait for elevateKubeSystemPrivileges
	I1227 10:28:51.933314  501861 kubeadm.go:403] duration metric: took 18.202388727s to StartCluster
	I1227 10:28:51.933333  501861 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:51.933398  501861 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:28:51.934447  501861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:51.934711  501861 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:28:51.934793  501861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:28:51.935073  501861 config.go:182] Loaded profile config "embed-certs-367691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:28:51.935113  501861 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:28:51.935177  501861 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-367691"
	I1227 10:28:51.935198  501861 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-367691"
	I1227 10:28:51.935227  501861 host.go:66] Checking if "embed-certs-367691" exists ...
	I1227 10:28:51.935684  501861 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:28:51.936210  501861 addons.go:70] Setting default-storageclass=true in profile "embed-certs-367691"
	I1227 10:28:51.936234  501861 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-367691"
	I1227 10:28:51.936513  501861 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:28:51.938248  501861 out.go:179] * Verifying Kubernetes components...
	I1227 10:28:51.942578  501861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:28:51.972788  501861 addons.go:239] Setting addon default-storageclass=true in "embed-certs-367691"
	I1227 10:28:51.972829  501861 host.go:66] Checking if "embed-certs-367691" exists ...
	I1227 10:28:51.973263  501861 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:28:51.987174  501861 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:28:51.990905  501861 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:28:51.990931  501861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:28:51.991007  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:52.013929  501861 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:28:52.013960  501861 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:28:52.014035  501861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:28:52.042114  501861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:28:52.060777  501861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:28:52.329188  501861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:28:52.354065  501861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:28:52.410138  501861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:28:52.483345  501861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:28:52.933862  501861 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 10:28:52.935951  501861 node_ready.go:35] waiting up to 6m0s for node "embed-certs-367691" to be "Ready" ...
	I1227 10:28:53.358159  501861 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
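At this point the embed-certs-367691 start has its addons enabled and is waiting up to 6m0s for the node to report Ready (the node_ready.go line above). The equivalent manual check with the same bundled kubectl would be (sketch):

	sudo /var/lib/minikube/binaries/v1.35.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  wait --for=condition=Ready node/embed-certs-367691 --timeout=6m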
	I1227 10:28:53.376207  478121 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:28:53.376241  478121 kubeadm.go:319] 
	I1227 10:28:53.376363  478121 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:28:53.380700  478121 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:28:53.380772  478121 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:28:53.380862  478121 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:28:53.380917  478121 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:28:53.380951  478121 kubeadm.go:319] OS: Linux
	I1227 10:28:53.380997  478121 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:28:53.381045  478121 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:28:53.381092  478121 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:28:53.381141  478121 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:28:53.381188  478121 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:28:53.381237  478121 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:28:53.381282  478121 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:28:53.381330  478121 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:28:53.381376  478121 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:28:53.381448  478121 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:28:53.381543  478121 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:28:53.381633  478121 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:28:53.381695  478121 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:28:53.384707  478121 out.go:252]   - Generating certificates and keys ...
	I1227 10:28:53.384801  478121 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:28:53.384866  478121 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:28:53.384975  478121 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:28:53.385036  478121 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:28:53.385105  478121 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:28:53.385158  478121 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:28:53.385260  478121 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:28:53.385323  478121 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:28:53.385396  478121 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:28:53.385489  478121 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:28:53.385528  478121 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:28:53.385583  478121 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:28:53.385634  478121 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:28:53.385690  478121 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:28:53.385742  478121 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:28:53.385805  478121 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:28:53.385859  478121 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:28:53.385942  478121 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:28:53.386007  478121 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:28:53.389106  478121 out.go:252]   - Booting up control plane ...
	I1227 10:28:53.389286  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:28:53.389416  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:28:53.389492  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:28:53.389605  478121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:28:53.389707  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:28:53.389818  478121 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:28:53.389915  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:28:53.389957  478121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:28:53.390097  478121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:28:53.390208  478121 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:28:53.390277  478121 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000432104s
	I1227 10:28:53.390281  478121 kubeadm.go:319] 
	I1227 10:28:53.390344  478121 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:28:53.390378  478121 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:28:53.390493  478121 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:28:53.390498  478121 kubeadm.go:319] 
	I1227 10:28:53.390609  478121 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:28:53.390643  478121 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:28:53.390676  478121 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
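The interleaved process 478121 is the start that actually fails in this log: its kubelet never answers http://127.0.0.1:10248/healthz within the 4m0s window, so kubeadm aborts in the wait-control-plane phase. The checks kubeadm suggests, plus the health endpoint it was polling, can be run directly on that node (sketch):

	systemctl status kubelet
	journalctl -xeu kubelet
	curl -sSL http://127.0.0.1:10248/healthz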
	I1227 10:28:53.390739  478121 kubeadm.go:403] duration metric: took 8m7.705127798s to StartCluster
	I1227 10:28:53.390772  478121 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:28:53.390832  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:28:53.390929  478121 kubeadm.go:319] 
	I1227 10:28:53.423749  478121 cri.go:96] found id: ""
	I1227 10:28:53.423822  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.423845  478121 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:28:53.423871  478121 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:28:53.423989  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:28:53.462817  478121 cri.go:96] found id: ""
	I1227 10:28:53.462904  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.462936  478121 logs.go:284] No container was found matching "etcd"
	I1227 10:28:53.462962  478121 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:28:53.463057  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:28:53.494248  478121 cri.go:96] found id: ""
	I1227 10:28:53.494318  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.494342  478121 logs.go:284] No container was found matching "coredns"
	I1227 10:28:53.494366  478121 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:28:53.494455  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:28:53.530232  478121 cri.go:96] found id: ""
	I1227 10:28:53.530311  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.530336  478121 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:28:53.530373  478121 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:28:53.530509  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:28:53.557685  478121 cri.go:96] found id: ""
	I1227 10:28:53.557754  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.557778  478121 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:28:53.557802  478121 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:28:53.557887  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:28:53.592134  478121 cri.go:96] found id: ""
	I1227 10:28:53.592213  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.592236  478121 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:28:53.592277  478121 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:28:53.592367  478121 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:28:53.626986  478121 cri.go:96] found id: ""
	I1227 10:28:53.627055  478121 logs.go:282] 0 containers: []
	W1227 10:28:53.627078  478121 logs.go:284] No container was found matching "kindnet"
	I1227 10:28:53.627107  478121 logs.go:123] Gathering logs for kubelet ...
	I1227 10:28:53.627146  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:28:53.704973  478121 logs.go:123] Gathering logs for dmesg ...
	I1227 10:28:53.705012  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:28:53.723897  478121 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:28:53.723928  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:28:53.878167  478121 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:28:53.868148    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.869052    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.871356    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.871731    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.873468    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:28:53.868148    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.869052    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.871356    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.871731    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:53.873468    4929 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 10:28:53.878212  478121 logs.go:123] Gathering logs for CRI-O ...
	I1227 10:28:53.878224  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 10:28:53.920808  478121 logs.go:123] Gathering logs for container status ...
	I1227 10:28:53.920845  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 10:28:53.961041  478121 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000432104s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:28:53.961118  478121 out.go:285] * 
	W1227 10:28:53.961286  478121 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000432104s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:28:53.961308  478121 out.go:285] * 
	W1227 10:28:53.961787  478121 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:28:53.968848  478121 out.go:203] 
	W1227 10:28:53.971840  478121 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000432104s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:28:53.971893  478121 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:28:53.971916  478121 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:28:53.975167  478121 out.go:203] 
	
	
	==> CRI-O <==
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625367875Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625405339Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625466009Z" level=info msg="Create NRI interface"
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625605744Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625622375Z" level=info msg="runtime interface created"
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625638326Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625645022Z" level=info msg="runtime interface starting up..."
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625651906Z" level=info msg="starting plugins..."
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625669687Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 10:20:43 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:43.625741072Z" level=info msg="No systemd watchdog enabled"
	Dec 27 10:20:43 force-systemd-flag-915850 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 27 10:20:46 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:46.027127766Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=10cadb36-91c9-416b-a010-fbceb8fad2dc name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:20:46 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:46.028152785Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=74ae9bf5-b377-4758-8ea6-fbc62e021980 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:20:46 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:46.028757985Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=33b333e5-f7c3-468f-9658-ad2993e0b52f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:20:46 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:46.029296771Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=39af0548-0e80-43d3-bd53-92895e2ae60d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:20:46 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:46.029770901Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=00d64d1a-861c-429e-a908-311701627bb3 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:20:46 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:46.030267226Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ea3f432c-42f6-459f-a3f0-8bf672363834 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:20:46 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:20:46.030783933Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=fca00259-5177-4f69-a356-c333c426817b name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:51 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:24:51.555180417Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=19094ef2-96b4-476f-9a17-33fb0d879d51 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:51 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:24:51.556170548Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=8d1b08be-4fbd-4433-b433-0d6f110f54ea name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:51 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:24:51.556627808Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=d0a3b25c-2b43-4564-9635-cdd9612fc7c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:51 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:24:51.557082319Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=7bb7a0c5-0cef-4e89-acae-c3812b001f2c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:51 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:24:51.557492752Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0bced6e0-224d-4bbd-9dd5-6a853e4c9a9c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:51 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:24:51.557932387Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=72751287-86ce-42a4-98fc-6d2ecad3cfc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:51 force-systemd-flag-915850 crio[838]: time="2025-12-27T10:24:51.558414763Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=784bb09c-f83b-4ed5-abf5-a742f505c754 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:28:55.655288    5060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:55.656196    5060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:55.657806    5060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:55.658117    5060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:28:55.659573    5060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:28:55 up  2:11,  0 user,  load average: 1.83, 1.74, 1.87
	Linux force-systemd-flag-915850 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 10:28:53 force-systemd-flag-915850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:28:53 force-systemd-flag-915850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 27 10:28:53 force-systemd-flag-915850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:28:53 force-systemd-flag-915850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:28:53 force-systemd-flag-915850 kubelet[4919]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:28:53 force-systemd-flag-915850 kubelet[4919]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:28:53 force-systemd-flag-915850 kubelet[4919]: E1227 10:28:53.821852    4919 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:28:53 force-systemd-flag-915850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:28:53 force-systemd-flag-915850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:28:54 force-systemd-flag-915850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 27 10:28:54 force-systemd-flag-915850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:28:54 force-systemd-flag-915850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:28:54 force-systemd-flag-915850 kubelet[4956]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:28:54 force-systemd-flag-915850 kubelet[4956]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:28:54 force-systemd-flag-915850 kubelet[4956]: E1227 10:28:54.546287    4956 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:28:54 force-systemd-flag-915850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:28:54 force-systemd-flag-915850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:28:55 force-systemd-flag-915850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 27 10:28:55 force-systemd-flag-915850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:28:55 force-systemd-flag-915850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:28:55 force-systemd-flag-915850 kubelet[4980]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:28:55 force-systemd-flag-915850 kubelet[4980]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:28:55 force-systemd-flag-915850 kubelet[4980]: E1227 10:28:55.300081    4980 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:28:55 force-systemd-flag-915850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:28:55 force-systemd-flag-915850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-915850 -n force-systemd-flag-915850
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-915850 -n force-systemd-flag-915850: exit status 6 (391.934344ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:28:56.199588  504919 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-915850" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-915850" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-915850" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-915850
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-915850: (1.999618015s)
--- FAIL: TestForceSystemdFlag (506.33s)
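The kubelet journal above shows why the restart loop never converges: kubelet v1.35 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1 ... set 'FailCgroupV1' to 'false'"), so the failure is tied to the runner's cgroup hierarchy rather than to the systemd cgroup-driver setting the test exercises. A minimal triage sketch, assuming shell access to the same runner; the cgroup check is an assumption about the host, and the start flags are copied from the suggestion printed in the log (profile name is illustrative):

	# Confirm which cgroup hierarchy the host exposes:
	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, the case kubelet v1.35 rejects.
	stat -fc %T /sys/fs/cgroup/

	# Inspect the restart loop with the same commands kubeadm suggests above.
	systemctl status kubelet
	journalctl -xeu kubelet

	# Retry with the suggestion minikube prints; on a cgroup v1 host this alone may not
	# be enough, since the kubelet warning also requires FailCgroupV1 to be set to false.
	out/minikube-linux-arm64 start -p force-systemd-flag-915850 \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

Whether FailCgroupV1 can be passed through minikube's --extra-config is not confirmed by this log; the kubeadm warning only names the kubelet configuration option.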

                                                
                                    
x
+
TestForceSystemdEnv (508.39s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-193016 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1227 10:14:45.804846  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-193016 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m24.830936703s)

                                                
                                                
-- stdout --
	* [force-systemd-env-193016] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-193016" primary control-plane node in "force-systemd-env-193016" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:14:31.634232  460609 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:14:31.634355  460609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:14:31.634367  460609 out.go:374] Setting ErrFile to fd 2...
	I1227 10:14:31.634372  460609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:14:31.634624  460609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:14:31.635064  460609 out.go:368] Setting JSON to false
	I1227 10:14:31.635916  460609 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7025,"bootTime":1766823447,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:14:31.636031  460609 start.go:143] virtualization:  
	I1227 10:14:31.642391  460609 out.go:179] * [force-systemd-env-193016] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:14:31.645995  460609 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:14:31.646059  460609 notify.go:221] Checking for updates...
	I1227 10:14:31.652633  460609 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:14:31.656052  460609 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:14:31.659426  460609 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:14:31.662540  460609 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:14:31.665714  460609 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1227 10:14:31.669495  460609 config.go:182] Loaded profile config "test-preload-009152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:14:31.669612  460609 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:14:31.705232  460609 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:14:31.705351  460609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:14:31.776613  460609 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:14:31.766865831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:14:31.776725  460609 docker.go:319] overlay module found
	I1227 10:14:31.780058  460609 out.go:179] * Using the docker driver based on user configuration
	I1227 10:14:31.783071  460609 start.go:309] selected driver: docker
	I1227 10:14:31.783088  460609 start.go:928] validating driver "docker" against <nil>
	I1227 10:14:31.783104  460609 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:14:31.783850  460609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:14:31.838003  460609 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:14:31.828925997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:14:31.838155  460609 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:14:31.838395  460609 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 10:14:31.841343  460609 out.go:179] * Using Docker driver with root privileges
	I1227 10:14:31.844168  460609 cni.go:84] Creating CNI manager for ""
	I1227 10:14:31.844234  460609 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:14:31.844248  460609 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:14:31.844336  460609 start.go:353] cluster config:
	{Name:force-systemd-env-193016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-193016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:14:31.847459  460609 out.go:179] * Starting "force-systemd-env-193016" primary control-plane node in "force-systemd-env-193016" cluster
	I1227 10:14:31.850360  460609 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:14:31.853277  460609 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:14:31.856256  460609 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:14:31.856290  460609 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:14:31.856339  460609 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:14:31.856350  460609 cache.go:65] Caching tarball of preloaded images
	I1227 10:14:31.856431  460609 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:14:31.856441  460609 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:14:31.856564  460609 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/config.json ...
	I1227 10:14:31.856582  460609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/config.json: {Name:mke0a1530d0cc6e38688b62b76abd172d5c8701f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:14:31.875576  460609 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:14:31.875601  460609 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:14:31.875622  460609 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:14:31.875654  460609 start.go:360] acquireMachinesLock for force-systemd-env-193016: {Name:mk04b652d5f1127046c6875bac2c4b2d585da777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:14:31.875764  460609 start.go:364] duration metric: took 87.853µs to acquireMachinesLock for "force-systemd-env-193016"
	I1227 10:14:31.875801  460609 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-193016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-193016 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:14:31.875868  460609 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:14:31.879390  460609 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:14:31.879629  460609 start.go:159] libmachine.API.Create for "force-systemd-env-193016" (driver="docker")
	I1227 10:14:31.879663  460609 client.go:173] LocalClient.Create starting
	I1227 10:14:31.879732  460609 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:14:31.879776  460609 main.go:144] libmachine: Decoding PEM data...
	I1227 10:14:31.879804  460609 main.go:144] libmachine: Parsing certificate...
	I1227 10:14:31.879861  460609 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:14:31.879884  460609 main.go:144] libmachine: Decoding PEM data...
	I1227 10:14:31.879895  460609 main.go:144] libmachine: Parsing certificate...
	I1227 10:14:31.880314  460609 cli_runner.go:164] Run: docker network inspect force-systemd-env-193016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:14:31.896035  460609 cli_runner.go:211] docker network inspect force-systemd-env-193016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:14:31.896116  460609 network_create.go:284] running [docker network inspect force-systemd-env-193016] to gather additional debugging logs...
	I1227 10:14:31.896137  460609 cli_runner.go:164] Run: docker network inspect force-systemd-env-193016
	W1227 10:14:31.911853  460609 cli_runner.go:211] docker network inspect force-systemd-env-193016 returned with exit code 1
	I1227 10:14:31.911882  460609 network_create.go:287] error running [docker network inspect force-systemd-env-193016]: docker network inspect force-systemd-env-193016: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-193016 not found
	I1227 10:14:31.911894  460609 network_create.go:289] output of [docker network inspect force-systemd-env-193016]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-193016 not found
	
	** /stderr **
	I1227 10:14:31.912042  460609 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:14:31.929155  460609 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:14:31.929583  460609 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:14:31.929912  460609 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:14:31.930405  460609 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bdf10}
	I1227 10:14:31.930428  460609 network_create.go:124] attempt to create docker network force-systemd-env-193016 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:14:31.930499  460609 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-193016 force-systemd-env-193016
	I1227 10:14:31.990461  460609 network_create.go:108] docker network force-systemd-env-193016 192.168.76.0/24 created
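	(Editor's note: the subnet scan above walks candidate private /24 networks and takes the first one not already claimed by an existing bridge, which is why 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are skipped and 192.168.76.0/24 is chosen. A minimal Go sketch of that selection logic, assuming the taken CIDRs were already gathered from `docker network inspect`; the starting octet and the step of 9 simply mirror the values visible in this log, not a confirmed minikube constant.)

	package main

	import "fmt"

	// firstFreeSubnet returns the first 192.168.x.0/24 candidate not present in
	// taken, stepping the third octet by 9 as the log shows (49, 58, 67, 76, ...).
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet < 256; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return "" // no free candidate
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, // existing bridges seen in the log
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
	}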
	I1227 10:14:31.990498  460609 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-193016" container
	I1227 10:14:31.990590  460609 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:14:32.015379  460609 cli_runner.go:164] Run: docker volume create force-systemd-env-193016 --label name.minikube.sigs.k8s.io=force-systemd-env-193016 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:14:32.047405  460609 oci.go:103] Successfully created a docker volume force-systemd-env-193016
	I1227 10:14:32.047495  460609 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-193016-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-193016 --entrypoint /usr/bin/test -v force-systemd-env-193016:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:14:32.710780  460609 oci.go:107] Successfully prepared a docker volume force-systemd-env-193016
	I1227 10:14:32.710842  460609 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:14:32.710852  460609 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:14:32.710923  460609 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-193016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:14:37.184362  460609 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-193016:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.473401753s)
	I1227 10:14:37.184391  460609 kic.go:203] duration metric: took 4.47353677s to extract preloaded images to volume ...
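	(Editor's note: the preload tarball is extracted into the named volume by a throwaway container whose entrypoint is tar, so the host never needs lz4 installed. A rough Go sketch of assembling that docker invocation with os/exec, assuming the tarball path, volume name and kicbase image shown above; error handling is trimmed and this is an illustration, not minikube's actual code.)

	package kicsketch

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload untars a preloaded image tarball into a docker volume by
	// running tar inside a disposable container, mirroring the command in the log.
	func extractPreload(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract preload: %v: %s", err, out)
		}
		return nil
	}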
	W1227 10:14:37.184521  460609 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:14:37.184621  460609 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:14:37.276044  460609 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-193016 --name force-systemd-env-193016 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-193016 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-193016 --network force-systemd-env-193016 --ip 192.168.76.2 --volume force-systemd-env-193016:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:14:37.651143  460609 cli_runner.go:164] Run: docker container inspect force-systemd-env-193016 --format={{.State.Running}}
	I1227 10:14:37.675219  460609 cli_runner.go:164] Run: docker container inspect force-systemd-env-193016 --format={{.State.Status}}
	I1227 10:14:37.695230  460609 cli_runner.go:164] Run: docker exec force-systemd-env-193016 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:14:37.775689  460609 oci.go:144] the created container "force-systemd-env-193016" has a running status.
	I1227 10:14:37.775717  460609 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-env-193016/id_rsa...
	I1227 10:14:38.039033  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-env-193016/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 10:14:38.039086  460609 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-env-193016/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:14:38.077863  460609 cli_runner.go:164] Run: docker container inspect force-systemd-env-193016 --format={{.State.Status}}
	I1227 10:14:38.108698  460609 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:14:38.108719  460609 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-193016 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:14:38.221764  460609 cli_runner.go:164] Run: docker container inspect force-systemd-env-193016 --format={{.State.Status}}
	I1227 10:14:38.248879  460609 machine.go:94] provisionDockerMachine start ...
	I1227 10:14:38.248965  460609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-193016
	I1227 10:14:38.269617  460609 main.go:144] libmachine: Using SSH client type: native
	I1227 10:14:38.269949  460609 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1227 10:14:38.269958  460609 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:14:38.270760  460609 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46346->127.0.0.1:33373: read: connection reset by peer
	I1227 10:14:41.424016  460609 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-193016
	
	I1227 10:14:41.424042  460609 ubuntu.go:182] provisioning hostname "force-systemd-env-193016"
	I1227 10:14:41.424125  460609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-193016
	I1227 10:14:41.445588  460609 main.go:144] libmachine: Using SSH client type: native
	I1227 10:14:41.445914  460609 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1227 10:14:41.445932  460609 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-193016 && echo "force-systemd-env-193016" | sudo tee /etc/hostname
	I1227 10:14:41.614031  460609 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-193016
	
	I1227 10:14:41.614125  460609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-193016
	I1227 10:14:41.633680  460609 main.go:144] libmachine: Using SSH client type: native
	I1227 10:14:41.634007  460609 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1227 10:14:41.634036  460609 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-193016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-193016/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-193016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:14:41.789333  460609 main.go:144] libmachine: SSH cmd err, output: <nil>: 
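	(Editor's note: provisionDockerMachine runs each step above — hostname, the /etc/hosts patch — as a one-shot SSH command against the forwarded 127.0.0.1 port; the very first dial is reset because sshd inside the freshly started container is not yet listening, and the client simply retries. A small Go sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh; the user, key path, port and retry counts are taken from or inspired by the log and are illustrative only.)

	package provision

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// runWithRetry dials the forwarded SSH port until sshd answers, then runs a
	// single command and returns its output.
	func runWithRetry(addr, keyPath, command string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
			Timeout:         5 * time.Second,
		}
		var client *ssh.Client
		for i := 0; i < 10; i++ { // tolerate "connection reset by peer" while sshd starts
			if client, err = ssh.Dial("tcp", addr, cfg); err == nil {
				break
			}
			time.Sleep(time.Second)
		}
		if err != nil {
			return "", fmt.Errorf("ssh dial: %w", err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.Output(command)
		return string(out), err
	}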
	I1227 10:14:41.789363  460609 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:14:41.789386  460609 ubuntu.go:190] setting up certificates
	I1227 10:14:41.789395  460609 provision.go:84] configureAuth start
	I1227 10:14:41.789455  460609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-193016
	I1227 10:14:41.822718  460609 provision.go:143] copyHostCerts
	I1227 10:14:41.822761  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:14:41.822794  460609 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:14:41.822802  460609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:14:41.822880  460609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:14:41.822955  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:14:41.822978  460609 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:14:41.822982  460609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:14:41.823007  460609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:14:41.823045  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:14:41.823074  460609 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:14:41.823079  460609 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:14:41.823101  460609 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:14:41.823166  460609 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-193016 san=[127.0.0.1 192.168.76.2 force-systemd-env-193016 localhost minikube]
	I1227 10:14:42.445088  460609 provision.go:177] copyRemoteCerts
	I1227 10:14:42.445198  460609 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:14:42.445277  460609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-193016
	I1227 10:14:42.462671  460609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-env-193016/id_rsa Username:docker}
	I1227 10:14:42.566079  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 10:14:42.566133  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1227 10:14:42.593377  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 10:14:42.593439  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:14:42.618945  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 10:14:42.619007  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:14:42.643142  460609 provision.go:87] duration metric: took 853.717234ms to configureAuth
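	(Editor's note: configureAuth above issues a server certificate signed by the minikube CA with the SANs listed in the log — 127.0.0.1, 192.168.76.2, the machine name, localhost, minikube. A minimal crypto/x509 sketch of that signing step, assuming the CA certificate and key are already parsed into caCert/caKey; serial number, validity and key size are illustrative, not minikube's actual values.)

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a server certificate carrying the IP and DNS SANs
	// used for the machine, signed by the given CA. Returns DER bytes and the key.
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-193016"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:     []string{"force-systemd-env-193016", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}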
	I1227 10:14:42.643185  460609 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:14:42.643446  460609 config.go:182] Loaded profile config "force-systemd-env-193016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:14:42.643594  460609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-193016
	I1227 10:14:42.669649  460609 main.go:144] libmachine: Using SSH client type: native
	I1227 10:14:42.669976  460609 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1227 10:14:42.669996  460609 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:14:43.077412  460609 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:14:43.077487  460609 machine.go:97] duration metric: took 4.828587818s to provisionDockerMachine
	I1227 10:14:43.077514  460609 client.go:176] duration metric: took 11.197837519s to LocalClient.Create
	I1227 10:14:43.077564  460609 start.go:167] duration metric: took 11.197919669s to libmachine.API.Create "force-systemd-env-193016"
	I1227 10:14:43.077599  460609 start.go:293] postStartSetup for "force-systemd-env-193016" (driver="docker")
	I1227 10:14:43.077642  460609 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:14:43.077739  460609 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:14:43.077813  460609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-193016
	I1227 10:14:43.095804  460609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-env-193016/id_rsa Username:docker}
	I1227 10:14:43.208404  460609 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:14:43.215609  460609 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:14:43.215659  460609 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:14:43.215685  460609 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:14:43.215745  460609 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:14:43.215860  460609 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:14:43.215871  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 10:14:43.216020  460609 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:14:43.230137  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:14:43.259613  460609 start.go:296] duration metric: took 181.983057ms for postStartSetup
	I1227 10:14:43.260053  460609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-193016
	I1227 10:14:43.295299  460609 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/config.json ...
	I1227 10:14:43.295606  460609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:14:43.295657  460609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-193016
	I1227 10:14:43.331795  460609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-env-193016/id_rsa Username:docker}
	I1227 10:14:43.437702  460609 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:14:43.445898  460609 start.go:128] duration metric: took 11.570014564s to createHost
	I1227 10:14:43.445927  460609 start.go:83] releasing machines lock for "force-systemd-env-193016", held for 11.570149269s
	I1227 10:14:43.446034  460609 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-193016
	I1227 10:14:43.479447  460609 ssh_runner.go:195] Run: cat /version.json
	I1227 10:14:43.479505  460609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-193016
	I1227 10:14:43.480285  460609 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:14:43.480371  460609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-193016
	I1227 10:14:43.512140  460609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-env-193016/id_rsa Username:docker}
	I1227 10:14:43.521842  460609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-env-193016/id_rsa Username:docker}
	I1227 10:14:43.632548  460609 ssh_runner.go:195] Run: systemctl --version
	I1227 10:14:43.765762  460609 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:14:43.833784  460609 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:14:43.841748  460609 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:14:43.841833  460609 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:14:43.906942  460609 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:14:43.906988  460609 start.go:496] detecting cgroup driver to use...
	I1227 10:14:43.907020  460609 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 10:14:43.907083  460609 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:14:43.948514  460609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:14:43.968934  460609 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:14:43.969014  460609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:14:43.994568  460609 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:14:44.024235  460609 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:14:44.255557  460609 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:14:44.466562  460609 docker.go:234] disabling docker service ...
	I1227 10:14:44.466717  460609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:14:44.496340  460609 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:14:44.517686  460609 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:14:44.700364  460609 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:14:44.858846  460609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:14:44.872447  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:14:44.889424  460609 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:14:44.889537  460609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:14:44.900860  460609 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 10:14:44.900976  460609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:14:44.910317  460609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:14:44.923921  460609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:14:44.934041  460609 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:14:44.943174  460609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:14:44.953290  460609 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:14:44.973720  460609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:14:44.983294  460609 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:14:44.991709  460609 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:14:45.000727  460609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:14:45.194305  460609 ssh_runner.go:195] Run: sudo systemctl restart crio
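	(Editor's note: the run of sed commands above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf so that pause_image and cgroup_manager match what the test requested, before crio is restarted. A hedged Go sketch of the same in-place substitutions done with regexp; path and values are copied from the log, and this is an illustration rather than minikube's implementation.)

	package crioconf

	import (
		"os"
		"regexp"
	)

	// patchCrioConf rewrites the pause image and cgroup manager lines in a CRI-O
	// drop-in config, the same substitutions the sed commands in the log perform.
	func patchCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
		return os.WriteFile(path, data, 0o644)
	}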
	I1227 10:14:45.419540  460609 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:14:45.419615  460609 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:14:45.423719  460609 start.go:574] Will wait 60s for crictl version
	I1227 10:14:45.423787  460609 ssh_runner.go:195] Run: which crictl
	I1227 10:14:45.427512  460609 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:14:45.455058  460609 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:14:45.455154  460609 ssh_runner.go:195] Run: crio --version
	I1227 10:14:45.490743  460609 ssh_runner.go:195] Run: crio --version
	I1227 10:14:45.523600  460609 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:14:45.526661  460609 cli_runner.go:164] Run: docker network inspect force-systemd-env-193016 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:14:45.544305  460609 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:14:45.548631  460609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:14:45.558920  460609 kubeadm.go:884] updating cluster {Name:force-systemd-env-193016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-193016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:14:45.559048  460609 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:14:45.559105  460609 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:14:45.647354  460609 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:14:45.647381  460609 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:14:45.647442  460609 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:14:45.680129  460609 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:14:45.680160  460609 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:14:45.680175  460609 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:14:45.680288  460609 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-193016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-193016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:14:45.680381  460609 ssh_runner.go:195] Run: crio config
	I1227 10:14:45.780851  460609 cni.go:84] Creating CNI manager for ""
	I1227 10:14:45.780960  460609 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:14:45.781037  460609 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:14:45.781148  460609 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-193016 NodeName:force-systemd-env-193016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:14:45.781399  460609 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-193016"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:14:45.781572  460609 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:14:45.796333  460609 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:14:45.796519  460609 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:14:45.810572  460609 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1227 10:14:45.832697  460609 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:14:45.849226  460609 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
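	(Editor's note: the three "scp memory" steps above push a kubelet systemd drop-in, the kubelet unit and the generated kubeadm.yaml to the node. A small Go sketch of rendering that ExecStart drop-in with text/template; the template text is reconstructed from the unit contents printed earlier in this log and is only an approximation of what minikube writes.)

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletDropIn approximates the 10-kubeadm.conf drop-in shown in the log.
	const kubeletDropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		_ = tmpl.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.35.0",
			"NodeName":          "force-systemd-env-193016",
			"NodeIP":            "192.168.76.2",
		})
	}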
	I1227 10:14:45.870147  460609 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:14:45.874289  460609 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:14:45.888315  460609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:14:46.028559  460609 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:14:46.046011  460609 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016 for IP: 192.168.76.2
	I1227 10:14:46.046047  460609 certs.go:195] generating shared ca certs ...
	I1227 10:14:46.046082  460609 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:14:46.046275  460609 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:14:46.046359  460609 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:14:46.046381  460609 certs.go:257] generating profile certs ...
	I1227 10:14:46.046458  460609 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/client.key
	I1227 10:14:46.046487  460609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/client.crt with IP's: []
	I1227 10:14:46.366540  460609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/client.crt ...
	I1227 10:14:46.366574  460609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/client.crt: {Name:mk0ae9e0945c192eb974a4f968af4805b52fcdf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:14:46.366774  460609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/client.key ...
	I1227 10:14:46.366788  460609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/client.key: {Name:mk042ca1477a84ffa5bfeaee89f51915d1693985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:14:46.366890  460609 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.key.e110c172
	I1227 10:14:46.366910  460609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.crt.e110c172 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:14:46.675170  460609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.crt.e110c172 ...
	I1227 10:14:46.675202  460609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.crt.e110c172: {Name:mkd3c2c22bc342c4cb0400a182ecb7e6b843561f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:14:46.675390  460609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.key.e110c172 ...
	I1227 10:14:46.675404  460609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.key.e110c172: {Name:mk349ee69f5d3e4feece6f6dfde8cbb018c62e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:14:46.675493  460609 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.crt.e110c172 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.crt
	I1227 10:14:46.675577  460609 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.key.e110c172 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.key
	I1227 10:14:46.675645  460609 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.key
	I1227 10:14:46.675664  460609 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.crt with IP's: []
	I1227 10:14:46.988845  460609 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.crt ...
	I1227 10:14:46.988875  460609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.crt: {Name:mk7149da32077511d0a2ce06e07a6873a8e0c0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:14:46.989082  460609 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.key ...
	I1227 10:14:46.989097  460609 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.key: {Name:mkb6f690840dcc7cccf1243fce5a4089f4914d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:14:46.989191  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 10:14:46.989213  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 10:14:46.989231  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 10:14:46.989244  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 10:14:46.989266  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 10:14:46.989282  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 10:14:46.989294  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 10:14:46.989307  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 10:14:46.989361  460609 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:14:46.989409  460609 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:14:46.989422  460609 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:14:46.989450  460609 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:14:46.989479  460609 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:14:46.989510  460609 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:14:46.989560  460609 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:14:46.989595  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 10:14:46.989612  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 10:14:46.989628  460609 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:14:46.990198  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:14:47.013906  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:14:47.032097  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:14:47.050387  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:14:47.068631  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 10:14:47.087076  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:14:47.105303  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:14:47.124563  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-env-193016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:14:47.143183  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:14:47.161424  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:14:47.179516  460609 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:14:47.196677  460609 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:14:47.209116  460609 ssh_runner.go:195] Run: openssl version
	I1227 10:14:47.215202  460609 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:14:47.222469  460609 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:14:47.229636  460609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:14:47.233524  460609 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:14:47.233724  460609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:14:47.275689  460609 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:14:47.283824  460609 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/299811.pem /etc/ssl/certs/51391683.0
	I1227 10:14:47.291672  460609 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:14:47.300711  460609 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:14:47.309642  460609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:14:47.315400  460609 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:14:47.315523  460609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:14:47.363792  460609 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:14:47.372724  460609 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2998112.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:14:47.380970  460609 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:14:47.392758  460609 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:14:47.409427  460609 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:14:47.414312  460609 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:14:47.414393  460609 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:14:47.460615  460609 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:14:47.470070  460609 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
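	(Editor's note: each CA bundle copied above is made visible to OpenSSL-based clients by symlinking it as <subject-hash>.0 under /etc/ssl/certs, with the hash taken from `openssl x509 -hash -noout`. A short Go sketch of that step, shelling out to openssl the same way the remote commands in the log do; it assumes openssl is on PATH and is an illustration only.)

	package cacerts

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM certificate and
	// creates the <hash>.0 symlink, mirroring the openssl + ln -fs pair above.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
		return os.Symlink(certPath, link)
	}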
	I1227 10:14:47.481886  460609 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:14:47.487532  460609 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:14:47.487594  460609 kubeadm.go:401] StartCluster: {Name:force-systemd-env-193016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-193016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:14:47.487665  460609 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:14:47.487736  460609 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:14:47.522722  460609 cri.go:96] found id: ""
	I1227 10:14:47.522812  460609 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:14:47.534864  460609 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:14:47.546991  460609 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:14:47.547057  460609 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:14:47.560281  460609 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:14:47.560301  460609 kubeadm.go:158] found existing configuration files:
	
	I1227 10:14:47.560376  460609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:14:47.570952  460609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:14:47.571071  460609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:14:47.581660  460609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:14:47.592362  460609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:14:47.592433  460609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:14:47.600648  460609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:14:47.611324  460609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:14:47.611386  460609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:14:47.619273  460609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:14:47.627618  460609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:14:47.627721  460609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
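	(Editor's note: because this is a fresh node, none of the /etc/kubernetes/*.conf files exist yet, so each grep above exits with status 2 and the matching rm is a no-op; on a restart the same loop prunes kubeconfigs that no longer point at control-plane.minikube.internal:8443. A compact Go sketch of that pruning logic run locally; the file names and endpoint string are taken from the log, and the function is an illustration rather than minikube's code.)

	package cleanup

	import (
		"os"
		"strings"
	)

	// pruneStaleKubeconfigs removes kubeconfig files that do not reference the
	// expected control-plane endpoint, matching the grep/rm loop in the log.
	func pruneStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f) // missing or stale: safe to (re)create during kubeadm init
			}
		}
	}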
	I1227 10:14:47.635765  460609 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:14:47.690870  460609 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:14:47.690965  460609 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:14:47.790988  460609 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:14:47.791069  460609 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:14:47.791106  460609 kubeadm.go:319] OS: Linux
	I1227 10:14:47.791158  460609 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:14:47.791210  460609 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:14:47.791262  460609 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:14:47.791316  460609 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:14:47.791364  460609 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:14:47.791416  460609 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:14:47.791467  460609 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:14:47.791519  460609 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:14:47.791568  460609 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:14:47.869048  460609 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:14:47.869161  460609 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:14:47.869260  460609 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:14:47.877500  460609 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:14:47.882421  460609 out.go:252]   - Generating certificates and keys ...
	I1227 10:14:47.882515  460609 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:14:47.882588  460609 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:14:48.183944  460609 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:14:48.636462  460609 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:14:48.725059  460609 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:14:49.005918  460609 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:14:49.749164  460609 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:14:49.749561  460609 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-193016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:14:49.922744  460609 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:14:49.923130  460609 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-193016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:14:50.185943  460609 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:14:50.472386  460609 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:14:51.654054  460609 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:14:51.654502  460609 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:14:51.911312  460609 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:14:52.172545  460609 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:14:52.385029  460609 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:14:52.653112  460609 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:14:52.887759  460609 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:14:52.888647  460609 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:14:52.891392  460609 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:14:52.894732  460609 out.go:252]   - Booting up control plane ...
	I1227 10:14:52.894847  460609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:14:52.894933  460609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:14:52.895006  460609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:14:52.926558  460609 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:14:52.926704  460609 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:14:52.940504  460609 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:14:52.941013  460609 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:14:52.941287  460609 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:14:53.089685  460609 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:14:53.089806  460609 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:18:53.090219  460609 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000892911s
	I1227 10:18:53.090269  460609 kubeadm.go:319] 
	I1227 10:18:53.090325  460609 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:18:53.090361  460609 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:18:53.090464  460609 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:18:53.090475  460609 kubeadm.go:319] 
	I1227 10:18:53.090574  460609 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:18:53.090609  460609 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:18:53.090651  460609 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:18:53.090659  460609 kubeadm.go:319] 
	I1227 10:18:53.094692  460609 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:18:53.095179  460609 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:18:53.095298  460609 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:18:53.095525  460609 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:18:53.095532  460609 kubeadm.go:319] 
	I1227 10:18:53.095613  460609 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:18:53.095753  460609 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-193016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-193016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000892911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-193016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-193016 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000892911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
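(The first kubeadm init attempt failed because the kubelet never answered its health endpoint within 4 minutes; minikube now runs kubeadm reset and retries. The triage steps suggested in the output above, collected in one place and runnable inside the node, e.g. via "minikube ssh -p force-systemd-env-193016" - the curl call is the same probe kubeadm was polling, per the error message:

  systemctl status kubelet
  journalctl -xeu kubelet
  curl -sSL http://127.0.0.1:10248/healthz
)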
	
	I1227 10:18:53.095830  460609 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1227 10:18:53.511565  460609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:18:53.525814  460609 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:18:53.525941  460609 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:18:53.534356  460609 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:18:53.534378  460609 kubeadm.go:158] found existing configuration files:
	
	I1227 10:18:53.534431  460609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:18:53.543157  460609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:18:53.543245  460609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:18:53.551406  460609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:18:53.566378  460609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:18:53.566463  460609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:18:53.574138  460609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:18:53.582111  460609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:18:53.582173  460609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:18:53.589690  460609 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:18:53.597857  460609 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:18:53.597926  460609 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:18:53.605675  460609 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:18:53.645893  460609 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:18:53.646203  460609 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:18:53.723335  460609 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:18:53.723413  460609 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:18:53.723455  460609 kubeadm.go:319] OS: Linux
	I1227 10:18:53.723508  460609 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:18:53.723561  460609 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:18:53.723613  460609 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:18:53.723665  460609 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:18:53.723718  460609 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:18:53.723771  460609 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:18:53.723822  460609 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:18:53.723875  460609 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:18:53.723926  460609 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:18:53.794223  460609 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:18:53.794347  460609 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:18:53.794445  460609 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:18:53.802505  460609 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:18:53.807994  460609 out.go:252]   - Generating certificates and keys ...
	I1227 10:18:53.808126  460609 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:18:53.808212  460609 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:18:53.808306  460609 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:18:53.808381  460609 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:18:53.808477  460609 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:18:53.808551  460609 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:18:53.808631  460609 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:18:53.808713  460609 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:18:53.808814  460609 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:18:53.808908  460609 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:18:53.808966  460609 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:18:53.809040  460609 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:18:54.073378  460609 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:18:54.285387  460609 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:18:54.924013  460609 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:18:55.518184  460609 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:18:55.771077  460609 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:18:55.771487  460609 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:18:55.773941  460609 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:18:55.777088  460609 out.go:252]   - Booting up control plane ...
	I1227 10:18:55.777197  460609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:18:55.777288  460609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:18:55.778658  460609 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:18:55.794468  460609 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:18:55.794578  460609 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:18:55.803195  460609 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:18:55.803614  460609 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:18:55.803860  460609 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:18:55.939786  460609 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:18:55.939926  460609 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:22:55.935895  460609 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000277019s
	I1227 10:22:55.935922  460609 kubeadm.go:319] 
	I1227 10:22:55.936061  460609 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:22:55.936103  460609 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:22:55.936213  460609 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:22:55.936217  460609 kubeadm.go:319] 
	I1227 10:22:55.936321  460609 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:22:55.936352  460609 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:22:55.936383  460609 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:22:55.936387  460609 kubeadm.go:319] 
	I1227 10:22:55.941292  460609 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:22:55.941715  460609 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:22:55.941829  460609 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:22:55.942067  460609 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:22:55.942077  460609 kubeadm.go:319] 
	I1227 10:22:55.942145  460609 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:22:55.942201  460609 kubeadm.go:403] duration metric: took 8m8.454612576s to StartCluster
	I1227 10:22:55.942240  460609 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:22:55.942305  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:22:55.994037  460609 cri.go:96] found id: ""
	I1227 10:22:55.994073  460609 logs.go:282] 0 containers: []
	W1227 10:22:55.994083  460609 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:22:55.994090  460609 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:22:55.994154  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:22:56.053216  460609 cri.go:96] found id: ""
	I1227 10:22:56.053245  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.053253  460609 logs.go:284] No container was found matching "etcd"
	I1227 10:22:56.053261  460609 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:22:56.053319  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:22:56.078350  460609 cri.go:96] found id: ""
	I1227 10:22:56.078376  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.078385  460609 logs.go:284] No container was found matching "coredns"
	I1227 10:22:56.078391  460609 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:22:56.078451  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:22:56.103753  460609 cri.go:96] found id: ""
	I1227 10:22:56.103780  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.103789  460609 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:22:56.103796  460609 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:22:56.103854  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:22:56.128814  460609 cri.go:96] found id: ""
	I1227 10:22:56.128838  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.128848  460609 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:22:56.128854  460609 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:22:56.128914  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:22:56.153484  460609 cri.go:96] found id: ""
	I1227 10:22:56.153511  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.153519  460609 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:22:56.153526  460609 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:22:56.153587  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:22:56.179548  460609 cri.go:96] found id: ""
	I1227 10:22:56.179575  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.179584  460609 logs.go:284] No container was found matching "kindnet"
	I1227 10:22:56.179622  460609 logs.go:123] Gathering logs for kubelet ...
	I1227 10:22:56.179642  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:22:56.248608  460609 logs.go:123] Gathering logs for dmesg ...
	I1227 10:22:56.248646  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:22:56.265661  460609 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:22:56.265690  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:22:56.329089  460609 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:22:56.320496    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.321146    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.322876    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.323454    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.325149    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:22:56.320496    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.321146    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.322876    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.323454    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.325149    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 10:22:56.329113  460609 logs.go:123] Gathering logs for CRI-O ...
	I1227 10:22:56.329126  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 10:22:56.361345  460609 logs.go:123] Gathering logs for container status ...
	I1227 10:22:56.361380  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
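(After the retry also times out, minikube gathers diagnostics; every crictl query above returned an empty id list, i.e. no control-plane container was ever created, which is consistent with the kubelet never becoming healthy. The same checks can be rerun by hand inside the node - commands as they appear in the log, with the "which crictl" fallback dropped for brevity; kube-apiserver is just one of the names queried above:

  sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver   # empty in this run
  sudo crictl ps -a                                               # all containers, any state
  sudo journalctl -u crio -n 400                                  # CRI-O side of the story
)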
	W1227 10:22:56.394884  460609 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:22:56.394934  460609 out.go:285] * 
	* 
	W1227 10:22:56.394985  460609 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:22:56.395002  460609 out.go:285] * 
	* 
	W1227 10:22:56.395250  460609 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
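(The run then exits with reason code K8S_KUBELET_NOT_RUNNING, shown below. As the advice box above says, a full log bundle is the most useful attachment for an upstream issue; for this profile that would be - profile name taken from this run, --file as shown in the box:

  minikube logs -p force-systemd-env-193016 --file=logs.txt
)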
	I1227 10:22:56.402017  460609 out.go:203] 
	W1227 10:22:56.405804  460609 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:22:56.405853  460609 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:22:56.405880  460609 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:22:56.408914  460609 out.go:203] 

                                                
                                                
** /stderr **
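The root cause above is the kubelet never answering its health probe at http://127.0.0.1:10248/healthz on this cgroup v1 host. The log itself suggests retrying with the systemd cgroup driver; a minimal sketch of that retry follows (the profile name, memory value, runtime and the MINIKUBE_FORCE_SYSTEMD variable exercised by this env-based test are taken from this run, and the command is illustrative rather than a verified fix):

	# delete the half-created profile, then retry with the cgroup driver the log suggests
	minikube delete -p force-systemd-env-193016
	MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-193016 \
	  --memory=3072 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails, 'systemctl status kubelet' and 'journalctl -xeu kubelet' inside the node (via 'minikube ssh') are the next diagnostics the kubeadm output recommends.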
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-193016 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-27 10:22:56.464602135 +0000 UTC m=+3199.837521500
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-193016
helpers_test.go:244: (dbg) docker inspect force-systemd-env-193016:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e09d8e63f1042bb42511ef14c20c98b2854a999d4d6dd6c95ca3e47e0bc83fed",
	        "Created": "2025-12-27T10:14:37.292564182Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 461484,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:14:37.369317351Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/e09d8e63f1042bb42511ef14c20c98b2854a999d4d6dd6c95ca3e47e0bc83fed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e09d8e63f1042bb42511ef14c20c98b2854a999d4d6dd6c95ca3e47e0bc83fed/hostname",
	        "HostsPath": "/var/lib/docker/containers/e09d8e63f1042bb42511ef14c20c98b2854a999d4d6dd6c95ca3e47e0bc83fed/hosts",
	        "LogPath": "/var/lib/docker/containers/e09d8e63f1042bb42511ef14c20c98b2854a999d4d6dd6c95ca3e47e0bc83fed/e09d8e63f1042bb42511ef14c20c98b2854a999d4d6dd6c95ca3e47e0bc83fed-json.log",
	        "Name": "/force-systemd-env-193016",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-193016:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-193016",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e09d8e63f1042bb42511ef14c20c98b2854a999d4d6dd6c95ca3e47e0bc83fed",
	                "LowerDir": "/var/lib/docker/overlay2/04241ea755713aced009607c6968a92a4fda11c81b68cf3da017d1a4f10f0588-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04241ea755713aced009607c6968a92a4fda11c81b68cf3da017d1a4f10f0588/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04241ea755713aced009607c6968a92a4fda11c81b68cf3da017d1a4f10f0588/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04241ea755713aced009607c6968a92a4fda11c81b68cf3da017d1a4f10f0588/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-193016",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-193016/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-193016",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-193016",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-193016",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d3b80eb8f7a9eb484fc08027c3499d75af5d04f6400803cabe830fd2775da123",
	            "SandboxKey": "/var/run/docker/netns/d3b80eb8f7a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33373"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33377"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33375"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33376"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-193016": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:9a:ec:05:41:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9e1e2556e14b2ccb3472931befbe647faf8591c9137f72f5f93b27ceb2892ba7",
	                    "EndpointID": "16bebe1446363396231139cde3106c6d32488df9dbf117c393fc71fff34acf50",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-193016",
	                        "e09d8e63f104"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
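The full docker inspect dump above shows the node container itself is Running with the expected 3072MB memory limit; only the Kubernetes control plane inside it failed to come up. When only a field or two is needed, the same data can be queried with a Go template, as the harness does elsewhere in this log (the queries below are illustrative):

	# state, memory limit (bytes), and the host port mapped to the API server port
	docker container inspect force-systemd-env-193016 --format '{{.State.Status}}'
	docker container inspect force-systemd-env-193016 --format '{{.HostConfig.Memory}}'
	docker container inspect force-systemd-env-193016 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'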
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-193016 -n force-systemd-env-193016
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-193016 -n force-systemd-env-193016: exit status 6 (366.642066ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:22:56.847379  482167 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-193016" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
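The exit-status-6 result is expected here: the start never completed, so the profile was never written to the kubeconfig and 'minikube status' can only report the host state. The status output itself suggests 'minikube update-context' for the stale-context warning; a sketch of that follow-up is below (profile name from this run; it can only succeed once the profile actually exists in the kubeconfig, which did not happen in this failed run):

	minikube update-context -p force-systemd-env-193016
	kubectl config current-context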
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-193016 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-785247 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl cat docker --no-pager                                                                       │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /etc/docker/daemon.json                                                                           │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo docker system info                                                                                    │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cri-dockerd --version                                                                                 │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl cat containerd --no-pager                                                                   │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /etc/containerd/config.toml                                                                       │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo containerd config dump                                                                                │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl cat crio --no-pager                                                                         │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo crio config                                                                                           │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ delete  │ -p cilium-785247                                                                                                            │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:17 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                   │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ delete  │ -p cert-expiration-528820                                                                                                   │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ start   │ -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-915850 │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:20:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:20:31.930680  478121 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:20:31.930791  478121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:20:31.930802  478121 out.go:374] Setting ErrFile to fd 2...
	I1227 10:20:31.930808  478121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:20:31.931055  478121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:20:31.931474  478121 out.go:368] Setting JSON to false
	I1227 10:20:31.932343  478121 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7385,"bootTime":1766823447,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:20:31.932412  478121 start.go:143] virtualization:  
	I1227 10:20:31.936368  478121 out.go:179] * [force-systemd-flag-915850] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:20:31.940642  478121 notify.go:221] Checking for updates...
	I1227 10:20:31.944143  478121 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:20:31.947434  478121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:20:31.950708  478121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:20:31.953969  478121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:20:31.957084  478121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:20:31.960150  478121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:20:31.963591  478121 config.go:182] Loaded profile config "force-systemd-env-193016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:20:31.963715  478121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:20:31.994124  478121 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:20:31.994268  478121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:20:32.052863  478121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:20:32.042998902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:20:32.052974  478121 docker.go:319] overlay module found
	I1227 10:20:32.056170  478121 out.go:179] * Using the docker driver based on user configuration
	I1227 10:20:32.058993  478121 start.go:309] selected driver: docker
	I1227 10:20:32.059011  478121 start.go:928] validating driver "docker" against <nil>
	I1227 10:20:32.059026  478121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:20:32.059808  478121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:20:32.122957  478121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:20:32.113247523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:20:32.123179  478121 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:20:32.123450  478121 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 10:20:32.126448  478121 out.go:179] * Using Docker driver with root privileges
	I1227 10:20:32.129481  478121 cni.go:84] Creating CNI manager for ""
	I1227 10:20:32.129555  478121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:20:32.129572  478121 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:20:32.129657  478121 start.go:353] cluster config:
	{Name:force-systemd-flag-915850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:20:32.132782  478121 out.go:179] * Starting "force-systemd-flag-915850" primary control-plane node in "force-systemd-flag-915850" cluster
	I1227 10:20:32.135655  478121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:20:32.138548  478121 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:20:32.141387  478121 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:20:32.141443  478121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:20:32.141454  478121 cache.go:65] Caching tarball of preloaded images
	I1227 10:20:32.141483  478121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:20:32.141545  478121 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:20:32.141556  478121 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:20:32.141670  478121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/config.json ...
	I1227 10:20:32.141687  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/config.json: {Name:mkd19636fe146d268a0d96b5322f2c1789c1ceab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:32.166287  478121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:20:32.166316  478121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:20:32.166332  478121 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:20:32.166365  478121 start.go:360] acquireMachinesLock for force-systemd-flag-915850: {Name:mk78a9e4e2c08cc91e948e8e89883b32b257e41b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:20:32.166510  478121 start.go:364] duration metric: took 123.489µs to acquireMachinesLock for "force-systemd-flag-915850"
	I1227 10:20:32.166544  478121 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-915850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:20:32.166616  478121 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:20:32.170129  478121 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:20:32.170385  478121 start.go:159] libmachine.API.Create for "force-systemd-flag-915850" (driver="docker")
	I1227 10:20:32.170424  478121 client.go:173] LocalClient.Create starting
	I1227 10:20:32.170498  478121 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:20:32.170543  478121 main.go:144] libmachine: Decoding PEM data...
	I1227 10:20:32.170564  478121 main.go:144] libmachine: Parsing certificate...
	I1227 10:20:32.170622  478121 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:20:32.170656  478121 main.go:144] libmachine: Decoding PEM data...
	I1227 10:20:32.170667  478121 main.go:144] libmachine: Parsing certificate...
	I1227 10:20:32.171065  478121 cli_runner.go:164] Run: docker network inspect force-systemd-flag-915850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:20:32.188748  478121 cli_runner.go:211] docker network inspect force-systemd-flag-915850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:20:32.188832  478121 network_create.go:284] running [docker network inspect force-systemd-flag-915850] to gather additional debugging logs...
	I1227 10:20:32.188857  478121 cli_runner.go:164] Run: docker network inspect force-systemd-flag-915850
	W1227 10:20:32.204458  478121 cli_runner.go:211] docker network inspect force-systemd-flag-915850 returned with exit code 1
	I1227 10:20:32.204489  478121 network_create.go:287] error running [docker network inspect force-systemd-flag-915850]: docker network inspect force-systemd-flag-915850: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-915850 not found
	I1227 10:20:32.204503  478121 network_create.go:289] output of [docker network inspect force-systemd-flag-915850]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-915850 not found
	
	** /stderr **
	I1227 10:20:32.204632  478121 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:20:32.221766  478121 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:20:32.222212  478121 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:20:32.222527  478121 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:20:32.222904  478121 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9e1e2556e14b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:a6:7b:e1:e3:10} reservation:<nil>}
	I1227 10:20:32.223395  478121 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2ba30}
	I1227 10:20:32.223418  478121 network_create.go:124] attempt to create docker network force-systemd-flag-915850 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:20:32.223480  478121 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-915850 force-systemd-flag-915850
	I1227 10:20:32.284343  478121 network_create.go:108] docker network force-systemd-flag-915850 192.168.85.0/24 created
	I1227 10:20:32.284388  478121 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-915850" container
	I1227 10:20:32.284464  478121 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:20:32.300734  478121 cli_runner.go:164] Run: docker volume create force-systemd-flag-915850 --label name.minikube.sigs.k8s.io=force-systemd-flag-915850 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:20:32.318657  478121 oci.go:103] Successfully created a docker volume force-systemd-flag-915850
	I1227 10:20:32.318742  478121 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-915850-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-915850 --entrypoint /usr/bin/test -v force-systemd-flag-915850:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:20:32.875591  478121 oci.go:107] Successfully prepared a docker volume force-systemd-flag-915850
	I1227 10:20:32.875665  478121 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:20:32.875677  478121 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:20:32.875757  478121 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-915850:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:20:36.766042  478121 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-915850:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.890244248s)
	I1227 10:20:36.766074  478121 kic.go:203] duration metric: took 3.890393649s to extract preloaded images to volume ...
	W1227 10:20:36.766221  478121 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:20:36.766356  478121 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:20:36.822869  478121 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-915850 --name force-systemd-flag-915850 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-915850 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-915850 --network force-systemd-flag-915850 --ip 192.168.85.2 --volume force-systemd-flag-915850:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:20:37.154443  478121 cli_runner.go:164] Run: docker container inspect force-systemd-flag-915850 --format={{.State.Running}}
	I1227 10:20:37.182402  478121 cli_runner.go:164] Run: docker container inspect force-systemd-flag-915850 --format={{.State.Status}}
	I1227 10:20:37.206113  478121 cli_runner.go:164] Run: docker exec force-systemd-flag-915850 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:20:37.263728  478121 oci.go:144] the created container "force-systemd-flag-915850" has a running status.
	I1227 10:20:37.263757  478121 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa...
	I1227 10:20:37.463439  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 10:20:37.463490  478121 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:20:37.493943  478121 cli_runner.go:164] Run: docker container inspect force-systemd-flag-915850 --format={{.State.Status}}
	I1227 10:20:37.529134  478121 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:20:37.529154  478121 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-915850 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:20:37.586308  478121 cli_runner.go:164] Run: docker container inspect force-systemd-flag-915850 --format={{.State.Status}}
	I1227 10:20:37.613588  478121 machine.go:94] provisionDockerMachine start ...
	I1227 10:20:37.613693  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:37.641839  478121 main.go:144] libmachine: Using SSH client type: native
	I1227 10:20:37.642931  478121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 10:20:37.642959  478121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:20:37.644177  478121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:20:40.788587  478121 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-915850
	
	I1227 10:20:40.788612  478121 ubuntu.go:182] provisioning hostname "force-systemd-flag-915850"
	I1227 10:20:40.788680  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:40.806854  478121 main.go:144] libmachine: Using SSH client type: native
	I1227 10:20:40.807184  478121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 10:20:40.807202  478121 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-915850 && echo "force-systemd-flag-915850" | sudo tee /etc/hostname
	I1227 10:20:40.961754  478121 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-915850
	
	I1227 10:20:40.961837  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:40.982618  478121 main.go:144] libmachine: Using SSH client type: native
	I1227 10:20:40.982938  478121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 10:20:40.982961  478121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-915850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-915850/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-915850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:20:41.120069  478121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:20:41.120098  478121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:20:41.120125  478121 ubuntu.go:190] setting up certificates
	I1227 10:20:41.120134  478121 provision.go:84] configureAuth start
	I1227 10:20:41.120196  478121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-915850
	I1227 10:20:41.137712  478121 provision.go:143] copyHostCerts
	I1227 10:20:41.137752  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:20:41.137784  478121 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:20:41.137800  478121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:20:41.137879  478121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:20:41.137966  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:20:41.137988  478121 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:20:41.137993  478121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:20:41.138026  478121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:20:41.138072  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:20:41.138091  478121 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:20:41.138099  478121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:20:41.138125  478121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:20:41.138182  478121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-915850 san=[127.0.0.1 192.168.85.2 force-systemd-flag-915850 localhost minikube]
	I1227 10:20:41.518101  478121 provision.go:177] copyRemoteCerts
	I1227 10:20:41.518175  478121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:20:41.518227  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:41.539095  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:41.639995  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 10:20:41.640057  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:20:41.658000  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 10:20:41.658067  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 10:20:41.676069  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 10:20:41.676148  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:20:41.694282  478121 provision.go:87] duration metric: took 574.131042ms to configureAuth
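The server certificate generated above (san=[127.0.0.1 192.168.85.2 force-systemd-flag-915850 localhost minikube]) is the one copied to /etc/docker/server.pem a few lines earlier. A minimal sketch for confirming those SANs on the node, assuming the force-systemd-flag-915850 container is still running (output not captured in this run):
	# print the Subject Alternative Names of the provisioned server cert
	docker exec force-systemd-flag-915850 \
	  openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'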
	I1227 10:20:41.694308  478121 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:20:41.694495  478121 config.go:182] Loaded profile config "force-systemd-flag-915850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:20:41.694611  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:41.711990  478121 main.go:144] libmachine: Using SSH client type: native
	I1227 10:20:41.712302  478121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1227 10:20:41.712319  478121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:20:41.996093  478121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:20:41.996120  478121 machine.go:97] duration metric: took 4.38249464s to provisionDockerMachine
	I1227 10:20:41.996132  478121 client.go:176] duration metric: took 9.825695738s to LocalClient.Create
	I1227 10:20:41.996175  478121 start.go:167] duration metric: took 9.825791689s to libmachine.API.Create "force-systemd-flag-915850"
	I1227 10:20:41.996196  478121 start.go:293] postStartSetup for "force-systemd-flag-915850" (driver="docker")
	I1227 10:20:41.996207  478121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:20:41.996319  478121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:20:41.996389  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:42.018453  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:42.122612  478121 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:20:42.126834  478121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:20:42.126865  478121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:20:42.126879  478121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:20:42.126941  478121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:20:42.127027  478121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:20:42.127034  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 10:20:42.127146  478121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:20:42.136790  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:20:42.160765  478121 start.go:296] duration metric: took 164.552396ms for postStartSetup
	I1227 10:20:42.161206  478121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-915850
	I1227 10:20:42.181132  478121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/config.json ...
	I1227 10:20:42.181481  478121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:20:42.181552  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:42.203830  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:42.305366  478121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:20:42.310662  478121 start.go:128] duration metric: took 10.144029225s to createHost
	I1227 10:20:42.310689  478121 start.go:83] releasing machines lock for "force-systemd-flag-915850", held for 10.144162675s
	I1227 10:20:42.310786  478121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-915850
	I1227 10:20:42.328350  478121 ssh_runner.go:195] Run: cat /version.json
	I1227 10:20:42.328404  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:42.328411  478121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:20:42.328483  478121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-915850
	I1227 10:20:42.346992  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:42.361586  478121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/force-systemd-flag-915850/id_rsa Username:docker}
	I1227 10:20:42.543831  478121 ssh_runner.go:195] Run: systemctl --version
	I1227 10:20:42.550509  478121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:20:42.586366  478121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:20:42.591753  478121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:20:42.591850  478121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:20:42.619730  478121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:20:42.619765  478121 start.go:496] detecting cgroup driver to use...
	I1227 10:20:42.619780  478121 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 10:20:42.619846  478121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:20:42.637649  478121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:20:42.650429  478121 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:20:42.650516  478121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:20:42.668294  478121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:20:42.687085  478121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:20:42.796482  478121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:20:42.928169  478121 docker.go:234] disabling docker service ...
	I1227 10:20:42.928302  478121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:20:42.950479  478121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:20:42.968582  478121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:20:43.105389  478121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:20:43.226117  478121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:20:43.240585  478121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:20:43.254946  478121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:20:43.255057  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.264340  478121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1227 10:20:43.264464  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.273984  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.282800  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.292034  478121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:20:43.300658  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.309373  478121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.323655  478121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:20:43.332964  478121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:20:43.341203  478121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:20:43.348813  478121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:20:43.466821  478121 ssh_runner.go:195] Run: sudo systemctl restart crio
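Taken together, the sed edits above pin the pause image, switch the cgroup manager to systemd, move conmon into the pod cgroup, and open unprivileged ports from 0. A quick sketch for spot-checking the resulting values after the restart; the expected lines are inferred from those commands, not captured from this run:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",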
	I1227 10:20:43.632102  478121 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:20:43.632238  478121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:20:43.636120  478121 start.go:574] Will wait 60s for crictl version
	I1227 10:20:43.636220  478121 ssh_runner.go:195] Run: which crictl
	I1227 10:20:43.639739  478121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:20:43.668484  478121 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:20:43.668595  478121 ssh_runner.go:195] Run: crio --version
	I1227 10:20:43.699006  478121 ssh_runner.go:195] Run: crio --version
	I1227 10:20:43.746306  478121 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:20:43.749234  478121 cli_runner.go:164] Run: docker network inspect force-systemd-flag-915850 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:20:43.767455  478121 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:20:43.774854  478121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:20:43.785756  478121 kubeadm.go:884] updating cluster {Name:force-systemd-flag-915850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:20:43.785878  478121 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:20:43.785944  478121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:20:43.824116  478121 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:20:43.824141  478121 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:20:43.824203  478121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:20:43.855122  478121 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:20:43.855147  478121 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:20:43.855155  478121 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 10:20:43.855246  478121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-915850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:20:43.855334  478121 ssh_runner.go:195] Run: crio config
	I1227 10:20:43.913068  478121 cni.go:84] Creating CNI manager for ""
	I1227 10:20:43.913159  478121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:20:43.913205  478121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:20:43.913270  478121 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-915850 NodeName:force-systemd-flag-915850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:20:43.913565  478121 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-915850"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:20:43.913689  478121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:20:43.921731  478121 binaries.go:51] Found k8s binaries, skipping transfer
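The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml further down. With the v1.35.0 binaries already found on the node, a minimal sketch for sanity-checking such a config from inside the node without applying it (not part of this run):
	# render what kubeadm would do against the generated config, without changing node state
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run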
	I1227 10:20:43.921863  478121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:20:43.929963  478121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1227 10:20:43.943925  478121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:20:43.956945  478121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1227 10:20:43.970469  478121 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:20:43.974056  478121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:20:43.983380  478121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:20:44.102811  478121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:20:44.118920  478121 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850 for IP: 192.168.85.2
	I1227 10:20:44.118941  478121 certs.go:195] generating shared ca certs ...
	I1227 10:20:44.118958  478121 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.119112  478121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:20:44.119176  478121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:20:44.119191  478121 certs.go:257] generating profile certs ...
	I1227 10:20:44.119249  478121 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.key
	I1227 10:20:44.119276  478121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.crt with IP's: []
	I1227 10:20:44.403414  478121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.crt ...
	I1227 10:20:44.403449  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.crt: {Name:mkec717e6e011496cd9c1f8bc74cfe8adde984bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.403657  478121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.key ...
	I1227 10:20:44.403674  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/client.key: {Name:mk65632f861bdd44283621ad64eec0c5ca7b8982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.403769  478121 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key.428f7d60
	I1227 10:20:44.403787  478121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt.428f7d60 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 10:20:44.654009  478121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt.428f7d60 ...
	I1227 10:20:44.654044  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt.428f7d60: {Name:mk6a04d5e0c1ff33311fb8abd695fc81863946b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.654256  478121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key.428f7d60 ...
	I1227 10:20:44.654271  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key.428f7d60: {Name:mke46b3df30588eb7b09514f090fda54e4c47e7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:44.654365  478121 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt.428f7d60 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt
	I1227 10:20:44.654449  478121 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key.428f7d60 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key
	I1227 10:20:44.654509  478121 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key
	I1227 10:20:44.654526  478121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt with IP's: []
	I1227 10:20:45.127936  478121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt ...
	I1227 10:20:45.128102  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt: {Name:mkf1c5cd040e978426be0be9636d11e865d6dd92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:45.128349  478121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key ...
	I1227 10:20:45.128921  478121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key: {Name:mkeb178a54735cd4a541c425df0e3bfebf6e0c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:20:45.129107  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 10:20:45.129130  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 10:20:45.129145  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 10:20:45.129158  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 10:20:45.129170  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 10:20:45.129185  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 10:20:45.129203  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 10:20:45.129216  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 10:20:45.129290  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:20:45.129336  478121 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:20:45.129346  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:20:45.129375  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:20:45.129400  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:20:45.129423  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:20:45.129476  478121 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:20:45.129509  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.129522  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.129533  478121 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.130133  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:20:45.160573  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:20:45.184674  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:20:45.215598  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:20:45.245786  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 10:20:45.279677  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:20:45.317979  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:20:45.343573  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/force-systemd-flag-915850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:20:45.367437  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:20:45.387119  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:20:45.409089  478121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:20:45.427310  478121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:20:45.441115  478121 ssh_runner.go:195] Run: openssl version
	I1227 10:20:45.447857  478121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.455304  478121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:20:45.462841  478121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.466717  478121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.466791  478121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:20:45.508030  478121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:20:45.515653  478121 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:20:45.523523  478121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.531439  478121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:20:45.539156  478121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.543062  478121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.543164  478121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:20:45.589306  478121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:20:45.596919  478121 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/299811.pem /etc/ssl/certs/51391683.0
	I1227 10:20:45.604643  478121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.612540  478121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:20:45.620269  478121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.624214  478121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.624293  478121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:20:45.666121  478121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:20:45.673885  478121 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2998112.pem /etc/ssl/certs/3ec20f2e.0
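Each /etc/ssl/certs/<hash>.0 symlink created above is named after the subject hash printed by the preceding `openssl x509 -hash -noout` run, which is how OpenSSL locates a CA by hash at verification time. A sketch of that pairing for the minikube CA (the hash value is taken from the symlink created above, not re-verified here):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected to print b5213941
	ls -l /etc/ssl/certs/b5213941.0                                            # links back to the minikube CA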
	I1227 10:20:45.681643  478121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:20:45.685563  478121 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:20:45.685615  478121 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-915850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-915850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:20:45.685700  478121 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:20:45.685766  478121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:20:45.719008  478121 cri.go:96] found id: ""
	I1227 10:20:45.719096  478121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:20:45.729330  478121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:20:45.738669  478121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:20:45.738748  478121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:20:45.749599  478121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:20:45.749617  478121 kubeadm.go:158] found existing configuration files:
	
	I1227 10:20:45.749677  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:20:45.759058  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:20:45.759133  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:20:45.767606  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:20:45.779697  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:20:45.779765  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:20:45.789383  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:20:45.797519  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:20:45.797615  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:20:45.805482  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:20:45.813697  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:20:45.813798  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:20:45.821908  478121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:20:45.934772  478121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:20:45.935200  478121 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:20:46.023590  478121 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:22:55.935895  460609 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000277019s
	I1227 10:22:55.935922  460609 kubeadm.go:319] 
	I1227 10:22:55.936061  460609 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:22:55.936103  460609 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:22:55.936213  460609 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:22:55.936217  460609 kubeadm.go:319] 
	I1227 10:22:55.936321  460609 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:22:55.936352  460609 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:22:55.936383  460609 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:22:55.936387  460609 kubeadm.go:319] 
	I1227 10:22:55.941292  460609 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:22:55.941715  460609 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:22:55.941829  460609 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:22:55.942067  460609 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:22:55.942077  460609 kubeadm.go:319] 
	I1227 10:22:55.942145  460609 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
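The troubleshooting commands suggested above, plus the health probe kubeadm was polling, can be run against the kicbase node container directly. A minimal sketch, assuming curl is available in the node image and that one of the force-systemd profiles from this job is still running (the container name for this PID is not shown in this excerpt):
	# pick the node container for the failing profile
	NODE="$(docker ps --format '{{.Names}}' | grep force-systemd | head -n1)"
	docker exec "$NODE" systemctl status kubelet --no-pager
	docker exec "$NODE" journalctl -u kubelet --no-pager -n 100
	# the probe that timed out after 4m0s:
	docker exec "$NODE" curl -sS http://127.0.0.1:10248/healthz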
	I1227 10:22:55.942201  460609 kubeadm.go:403] duration metric: took 8m8.454612576s to StartCluster
	I1227 10:22:55.942240  460609 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:22:55.942305  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:22:55.994037  460609 cri.go:96] found id: ""
	I1227 10:22:55.994073  460609 logs.go:282] 0 containers: []
	W1227 10:22:55.994083  460609 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:22:55.994090  460609 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:22:55.994154  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:22:56.053216  460609 cri.go:96] found id: ""
	I1227 10:22:56.053245  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.053253  460609 logs.go:284] No container was found matching "etcd"
	I1227 10:22:56.053261  460609 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:22:56.053319  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:22:56.078350  460609 cri.go:96] found id: ""
	I1227 10:22:56.078376  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.078385  460609 logs.go:284] No container was found matching "coredns"
	I1227 10:22:56.078391  460609 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:22:56.078451  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:22:56.103753  460609 cri.go:96] found id: ""
	I1227 10:22:56.103780  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.103789  460609 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:22:56.103796  460609 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:22:56.103854  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:22:56.128814  460609 cri.go:96] found id: ""
	I1227 10:22:56.128838  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.128848  460609 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:22:56.128854  460609 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:22:56.128914  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:22:56.153484  460609 cri.go:96] found id: ""
	I1227 10:22:56.153511  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.153519  460609 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:22:56.153526  460609 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:22:56.153587  460609 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:22:56.179548  460609 cri.go:96] found id: ""
	I1227 10:22:56.179575  460609 logs.go:282] 0 containers: []
	W1227 10:22:56.179584  460609 logs.go:284] No container was found matching "kindnet"
	I1227 10:22:56.179622  460609 logs.go:123] Gathering logs for kubelet ...
	I1227 10:22:56.179642  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:22:56.248608  460609 logs.go:123] Gathering logs for dmesg ...
	I1227 10:22:56.248646  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:22:56.265661  460609 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:22:56.265690  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:22:56.329089  460609 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:22:56.320496    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.321146    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.322876    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.323454    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.325149    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:22:56.320496    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.321146    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.322876    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.323454    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:56.325149    4947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 10:22:56.329113  460609 logs.go:123] Gathering logs for CRI-O ...
	I1227 10:22:56.329126  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1227 10:22:56.361345  460609 logs.go:123] Gathering logs for container status ...
	I1227 10:22:56.361380  460609 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 10:22:56.394884  460609 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:22:56.394934  460609 out.go:285] * 
	W1227 10:22:56.394985  460609 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:22:56.395002  460609 out.go:285] * 
	W1227 10:22:56.395250  460609 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:22:56.402017  460609 out.go:203] 
	W1227 10:22:56.405804  460609 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:22:56.405853  460609 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:22:56.405880  460609 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:22:56.408914  460609 out.go:203] 
	
	
	==> CRI-O <==
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.409071143Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.409242427Z" level=info msg="Starting seccomp notifier watcher"
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.409304573Z" level=info msg="Create NRI interface"
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.409396233Z" level=info msg="built-in NRI default validator is disabled"
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.409410904Z" level=info msg="runtime interface created"
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.409423245Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.409437202Z" level=info msg="runtime interface starting up..."
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.409443832Z" level=info msg="starting plugins..."
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.40945728Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 10:14:45 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:45.40952412Z" level=info msg="No systemd watchdog enabled"
	Dec 27 10:14:45 force-systemd-env-193016 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 27 10:14:47 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:47.873204411Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=2fcfa0a0-d462-48b4-b39d-7b40133844e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:14:47 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:47.873908934Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=63595d14-524b-4db1-bde7-50534ff8654c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:14:47 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:47.874499939Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=b49a938f-514b-4c40-8569-2472534ccf1c name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:14:47 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:47.875017753Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=60ebf479-c40a-415d-9f6e-45e62c9542b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:14:47 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:47.875502278Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=11b16f58-41a6-487c-a129-ee387cce3e39 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:14:47 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:47.875945794Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=16829a8d-4ff3-484c-b0f7-fcada7cf370e name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:14:47 force-systemd-env-193016 crio[839]: time="2025-12-27T10:14:47.876486091Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=89a917ed-a733-4ed5-99b0-b2ab30bd7ea8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:18:53 force-systemd-env-193016 crio[839]: time="2025-12-27T10:18:53.797918714Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=a5504674-a82d-4001-aaa0-4cfe01f00cb6 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:18:53 force-systemd-env-193016 crio[839]: time="2025-12-27T10:18:53.798792913Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=8025fb24-d751-4121-8d15-0a46af3a49f5 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:18:53 force-systemd-env-193016 crio[839]: time="2025-12-27T10:18:53.799385854Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=006dfa94-0b47-424e-94b2-e36b7ef70d49 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:18:53 force-systemd-env-193016 crio[839]: time="2025-12-27T10:18:53.799935275Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=f495a463-b4b1-4dfa-9ff1-4d06c2bc8b00 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:18:53 force-systemd-env-193016 crio[839]: time="2025-12-27T10:18:53.800661575Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0c327342-14cf-48b3-9621-bc220cbbeb19 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:18:53 force-systemd-env-193016 crio[839]: time="2025-12-27T10:18:53.801148177Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=6eb13c21-385e-42d3-a2c9-73c5aec2f786 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:18:53 force-systemd-env-193016 crio[839]: time="2025-12-27T10:18:53.801576177Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=feab0342-022e-4768-bce4-81744ce394e7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:22:57.498998    5066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:57.499918    5066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:57.501492    5066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:57.501857    5066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:22:57.504296    5066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.881821] overlayfs: idmapped layers are currently not supported
	[Dec27 09:44] overlayfs: idmapped layers are currently not supported
	[Dec27 09:45] overlayfs: idmapped layers are currently not supported
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	[Dec27 09:53] overlayfs: idmapped layers are currently not supported
	[Dec27 09:57] overlayfs: idmapped layers are currently not supported
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:22:57 up  2:05,  0 user,  load average: 0.43, 1.29, 1.86
	Linux force-systemd-env-193016 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 10:22:55 force-systemd-env-193016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:22:55 force-systemd-env-193016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 638.
	Dec 27 10:22:55 force-systemd-env-193016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:22:55 force-systemd-env-193016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:22:56 force-systemd-env-193016 kubelet[4883]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:22:56 force-systemd-env-193016 kubelet[4883]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:22:56 force-systemd-env-193016 kubelet[4883]: E1227 10:22:56.025728    4883 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:22:56 force-systemd-env-193016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:22:56 force-systemd-env-193016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:22:56 force-systemd-env-193016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 639.
	Dec 27 10:22:56 force-systemd-env-193016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:22:56 force-systemd-env-193016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:22:56 force-systemd-env-193016 kubelet[4973]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:22:56 force-systemd-env-193016 kubelet[4973]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:22:56 force-systemd-env-193016 kubelet[4973]: E1227 10:22:56.816592    4973 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:22:56 force-systemd-env-193016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:22:56 force-systemd-env-193016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:22:57 force-systemd-env-193016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 27 10:22:57 force-systemd-env-193016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:22:57 force-systemd-env-193016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:22:57 force-systemd-env-193016 kubelet[5071]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:22:57 force-systemd-env-193016 kubelet[5071]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 27 10:22:57 force-systemd-env-193016 kubelet[5071]: E1227 10:22:57.560665    5071 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:22:57 force-systemd-env-193016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:22:57 force-systemd-env-193016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
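The kubelet journal above shows why the control plane never came up: kubelet v1.35 refuses to start because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out against http://127.0.0.1:10248/healthz. A minimal triage sketch, assuming shell access to the node; the retry flag is the one the log itself suggests, and the FailCgroupV1 spelling comes from the kubeadm warning rather than anything verified here:

	# Confirm which cgroup hierarchy the host is running
	stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" => cgroup v2, "tmpfs" => cgroup v1

	# Retry with the flag suggested in the log above (illustrative; profile name taken from this test)
	out/minikube-linux-arm64 start -p force-systemd-env-193016 \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Per the kubeadm warning, staying on cgroup v1 with kubelet >= v1.35 would additionally
	# require the KubeletConfiguration option failCgroupV1: false (assumed field spelling).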
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-193016 -n force-systemd-env-193016
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-193016 -n force-systemd-env-193016: exit status 6 (316.662413ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:22:57.930810  482400 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-193016" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-193016" apiserver is not running, skipping kubectl commands (state="Stopped")
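The status output above also warns that kubectl is pointing at a stale minikube context and that the profile no longer appears in the kubeconfig. A minimal sketch of the usual fix, assuming the profile still existed (here it is deleted immediately afterwards, so this is illustrative only):

	out/minikube-linux-arm64 update-context -p force-systemd-env-193016   # rewrite the kubeconfig entry for the profile
	kubectl config current-context                                        # confirm which context kubectl now points at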
helpers_test.go:176: Cleaning up "force-systemd-env-193016" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-193016
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-193016: (2.026124321s)
--- FAIL: TestForceSystemdEnv (508.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (487.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1227 09:46:42.760127  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:47:10.444180  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:47:15.339338  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:51:42.757189  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:52:15.339208  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-513251 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (7m48.046736436s)
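The restart is invoked with --wait true, which (per the start_flags line in the stderr below) makes minikube block until apiserver, system_pods, default_sa, apps_running, node_ready, kubelet and extra are all healthy. A hedged equivalent using an explicit component list, which can help narrow down which check never completed; the component names are taken from the log and the flag syntax follows minikube's --wait option:

	out/minikube-linux-arm64 -p ha-513251 start \
	  --wait=apiserver,system_pods,default_sa,apps_running,node_ready,kubelet,extra \
	  --driver=docker --container-runtime=crio --alsologtostderr -v 5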

                                                
                                                
-- stdout --
	* [ha-513251] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-513251" primary control-plane node in "ha-513251" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	* Enabled addons: 
	
	* Starting "ha-513251-m02" control-plane node in "ha-513251" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:45:17.780858  353683 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:45:17.781066  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781099  353683 out.go:374] Setting ErrFile to fd 2...
	I1227 09:45:17.781121  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781427  353683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:45:17.781839  353683 out.go:368] Setting JSON to false
	I1227 09:45:17.782724  353683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5271,"bootTime":1766823447,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:45:17.782828  353683 start.go:143] virtualization:  
	I1227 09:45:17.786847  353683 out.go:179] * [ha-513251] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:45:17.789790  353683 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:45:17.789897  353683 notify.go:221] Checking for updates...
	I1227 09:45:17.795846  353683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:45:17.798784  353683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:17.801736  353683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:45:17.804638  353683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:45:17.807626  353683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:45:17.811252  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:17.811891  353683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:45:17.840112  353683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:45:17.840288  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.900770  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.89071505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.900884  353683 docker.go:319] overlay module found
	I1227 09:45:17.905637  353683 out.go:179] * Using the docker driver based on existing profile
	I1227 09:45:17.908470  353683 start.go:309] selected driver: docker
	I1227 09:45:17.908492  353683 start.go:928] validating driver "docker" against &{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.908638  353683 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:45:17.908737  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.967550  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.958343241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.968010  353683 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:45:17.968048  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:17.968104  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:17.968157  353683 start.go:353] cluster config:
	{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.971557  353683 out.go:179] * Starting "ha-513251" primary control-plane node in "ha-513251" cluster
	I1227 09:45:17.974341  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:17.977308  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:17.980127  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:17.980181  353683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:45:17.980196  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:17.980207  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:17.980281  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:17.980293  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:17.980447  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.000295  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:18.000319  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:18.000341  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:18.000375  353683 start.go:360] acquireMachinesLock for ha-513251: {Name:mka277024f8c2226ae51cd2727a8e25e47e84998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:18.000447  353683 start.go:364] duration metric: took 46.926µs to acquireMachinesLock for "ha-513251"
	I1227 09:45:18.000468  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:18.000475  353683 fix.go:54] fixHost starting: 
	I1227 09:45:18.000773  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.022293  353683 fix.go:112] recreateIfNeeded on ha-513251: state=Stopped err=<nil>
	W1227 09:45:18.022327  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:18.025796  353683 out.go:252] * Restarting existing docker container for "ha-513251" ...
	I1227 09:45:18.025962  353683 cli_runner.go:164] Run: docker start ha-513251
	I1227 09:45:18.291407  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.313034  353683 kic.go:430] container "ha-513251" state is running.
	I1227 09:45:18.313680  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:18.336728  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.337162  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:18.337228  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:18.363888  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:18.364313  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:18.364324  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:18.365396  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:45:21.507722  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.507748  353683 ubuntu.go:182] provisioning hostname "ha-513251"
	I1227 09:45:21.507813  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.525335  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.525658  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.525674  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251 && echo "ha-513251" | sudo tee /etc/hostname
	I1227 09:45:21.674143  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.674300  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.692486  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.692814  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.692838  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:21.832635  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:21.832681  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:21.832704  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:21.832713  353683 provision.go:84] configureAuth start
	I1227 09:45:21.832776  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:21.851553  353683 provision.go:143] copyHostCerts
	I1227 09:45:21.851617  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851676  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:21.851690  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851770  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:21.851873  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851904  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:21.851923  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851962  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:21.852092  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852114  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:21.852123  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852155  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:21.852214  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251 san=[127.0.0.1 192.168.49.2 ha-513251 localhost minikube]
	I1227 09:45:21.903039  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:21.903143  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:21.903193  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.920995  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.020706  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:22.020772  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1227 09:45:22.040457  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:22.040545  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:45:22.059426  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:22.059522  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:22.078437  353683 provision.go:87] duration metric: took 245.707104ms to configureAuth
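The server certificate generated above is expected to cover the SAN set from the provision.go line (127.0.0.1, 192.168.49.2, ha-513251, localhost, minikube). A minimal Go sketch of that check, assuming the server.pem path shown in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the provision.go line above.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// VerifyHostname accepts both DNS names and IP address strings.
	for _, san := range []string{"127.0.0.1", "192.168.49.2", "ha-513251", "localhost", "minikube"} {
		if err := cert.VerifyHostname(san); err != nil {
			fmt.Printf("missing SAN %q: %v\n", san, err)
		}
	}
}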
	I1227 09:45:22.078487  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:22.078740  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:22.078852  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.097273  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:22.097592  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:22.097611  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:22.461249  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:22.461332  353683 machine.go:97] duration metric: took 4.124155515s to provisionDockerMachine
	I1227 09:45:22.461358  353683 start.go:293] postStartSetup for "ha-513251" (driver="docker")
	I1227 09:45:22.461396  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:22.461505  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:22.461577  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.484466  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.588039  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:22.591353  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:22.591383  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:22.591396  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:22.591453  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:22.591540  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:22.591553  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:22.591653  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:22.599440  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:22.617415  353683 start.go:296] duration metric: took 156.015491ms for postStartSetup
	I1227 09:45:22.617497  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:22.617543  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.635627  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.733536  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:22.738441  353683 fix.go:56] duration metric: took 4.73795966s for fixHost
	I1227 09:45:22.738473  353683 start.go:83] releasing machines lock for "ha-513251", held for 4.738016497s
	I1227 09:45:22.738547  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:22.756007  353683 ssh_runner.go:195] Run: cat /version.json
	I1227 09:45:22.756077  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.756356  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:22.756411  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.775684  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.784683  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.974776  353683 ssh_runner.go:195] Run: systemctl --version
	I1227 09:45:22.981407  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:23.019688  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:23.024397  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:23.024482  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:23.033023  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:23.033048  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:23.033080  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:23.033128  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:23.048890  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:23.062391  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:23.062461  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:23.078874  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:23.092641  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:23.215628  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:23.336773  353683 docker.go:234] disabling docker service ...
	I1227 09:45:23.336856  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:23.351993  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:23.365076  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:23.486999  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:23.607630  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:23.621666  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:23.637617  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:23.637733  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.646729  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:23.646803  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.656407  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.665374  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.674513  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:23.682899  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.692638  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.701500  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.710461  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:23.718222  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:45:23.726035  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:23.837128  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
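The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A minimal Go sketch of the first two rewrites, assuming direct file access instead of the ssh_runner:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in path from the log above
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}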
	I1227 09:45:24.007170  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:45:24.007319  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:45:24.014123  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:45:24.014245  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:45:24.033366  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:45:24.058444  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:45:24.058524  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.087072  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.118588  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:45:24.121527  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:45:24.138224  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:45:24.142467  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:45:24.152932  353683 kubeadm.go:884] updating cluster {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:45:24.153087  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:24.153163  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.188918  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.188945  353683 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:45:24.189006  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.216272  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.216301  353683 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:45:24.216314  353683 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 09:45:24.216440  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:45:24.216534  353683 ssh_runner.go:195] Run: crio config
	I1227 09:45:24.292083  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:24.292105  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:24.292144  353683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:45:24.292181  353683 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-513251 NodeName:ha-513251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:45:24.292330  353683 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-513251"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
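	The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch that sanity-checks the stream with gopkg.in/yaml.v3, assuming the /var/tmp/minikube/kubeadm.yaml.new path written later in the log:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	kinds := map[string]bool{}
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		if k, ok := doc["kind"].(string); ok {
			kinds[k] = true
		}
	}
	for _, want := range []string{"InitConfiguration", "ClusterConfiguration", "KubeletConfiguration", "KubeProxyConfiguration"} {
		fmt.Printf("%s present: %v\n", want, kinds[want])
	}
}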
	
	I1227 09:45:24.292352  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:45:24.292412  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:45:24.304778  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
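	Control-plane load-balancing in kube-vip is skipped above because `lsmod | grep ip_vs` found no ipvs modules in the kernel. A minimal Go sketch of the same check against /proc/modules, which is the file lsmod reads:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		panic(err)
	}
	loaded := false
	for _, line := range strings.Split(string(data), "\n") {
		// One module per line; ip_vs and its helper modules share the prefix.
		if strings.HasPrefix(line, "ip_vs") {
			loaded = true
			break
		}
	}
	fmt.Println("ip_vs modules loaded:", loaded)
}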
	I1227 09:45:24.304912  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
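	The static pod manifest above runs kube-vip with NET_ADMIN/NET_RAW so it can claim the HA virtual IP 192.168.49.254 on eth0 and answer for the API server on port 8443. A minimal Go sketch that probes the VIP endpoint once the pod is up (a plain TCP dial, not a health check):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP address and port taken from the kube-vip manifest above.
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable at", conn.RemoteAddr())
}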
	I1227 09:45:24.305012  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:45:24.312901  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:45:24.312976  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 09:45:24.320559  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 09:45:24.334537  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:45:24.347371  353683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 09:45:24.360123  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:45:24.373098  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:45:24.376820  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:45:24.387127  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:24.503934  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:45:24.522185  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.2
	I1227 09:45:24.522204  353683 certs.go:195] generating shared ca certs ...
	I1227 09:45:24.522219  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.522359  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:45:24.522410  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:45:24.522417  353683 certs.go:257] generating profile certs ...
	I1227 09:45:24.522498  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:45:24.522526  353683 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14
	I1227 09:45:24.522540  353683 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1227 09:45:24.644648  353683 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 ...
	I1227 09:45:24.648971  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14: {Name:mkb5dff6e9ccf7c0fd52113e0d144d6316de11fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649217  353683 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 ...
	I1227 09:45:24.649259  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14: {Name:mk0fad6909993d85239fadc763725d8b8b7a440c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649401  353683 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt
	I1227 09:45:24.649572  353683 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key
	I1227 09:45:24.649765  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:45:24.649810  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:45:24.649846  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:45:24.649875  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:45:24.649918  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:45:24.649950  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:45:24.649988  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:45:24.650030  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:45:24.650060  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:45:24.650137  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:45:24.650200  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:45:24.650235  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:45:24.650297  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:45:24.650344  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:45:24.650434  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:45:24.650545  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:24.650616  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.650660  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:24.650689  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.651244  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:45:24.675286  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:45:24.694694  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:45:24.717231  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:45:24.749389  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:45:24.770851  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:45:24.790309  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:45:24.811612  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:45:24.834366  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:45:24.853802  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:45:24.871797  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:45:24.894130  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:45:24.908139  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:45:24.914716  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.922797  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:45:24.930729  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934601  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934686  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.976521  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:45:24.984298  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.991944  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:45:24.999664  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020750  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020853  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.066886  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:45:25.076628  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.086029  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:45:25.095338  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101041  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101118  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.145647  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
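	Each CA installed above ends up as /etc/ssl/certs/<subject-hash>.0, the name OpenSSL uses to look up trusted roots. A minimal sketch of deriving that symlink name, assuming the openssl binary is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("expected symlink: /etc/ssl/certs/%s.0\n", hash) // b5213941.0 in the run above
}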
	I1227 09:45:25.156431  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:45:25.165145  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:45:25.214664  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:45:25.265928  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:45:25.352085  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:45:25.431634  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:45:25.492845  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
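	Each `openssl x509 -checkend 86400` invocation above asks whether a control-plane certificate remains valid for at least another 24 hours. A minimal Go equivalent using crypto/x509, assuming the same /var/lib/minikube/certs paths:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	deadline := time.Now().Add(24 * time.Hour) // mirrors -checkend 86400
	for _, path := range certs {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(path, err)
			continue
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println(path, "no PEM block")
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(path, err)
			continue
		}
		if cert.NotAfter.Before(deadline) {
			fmt.Printf("%s expires within 24h: %s\n", path, cert.NotAfter)
		}
	}
}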
	I1227 09:45:25.554400  353683 kubeadm.go:401] StartCluster: {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:25.554601  353683 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:45:25.554705  353683 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:45:25.597558  353683 cri.go:96] found id: "7b1da10d6de7d31911e815a0a6e65bec0b462f36adac4663bcba270a51072ce3"
	I1227 09:45:25.597630  353683 cri.go:96] found id: "f69e010776644f8005f4cd92f4774d5dc92d62b50dadf798020d9d8db93f52a7"
	I1227 09:45:25.597649  353683 cri.go:96] found id: "f7e841ab1c87c3a73fb0fa9774a7d5540fae4454f87f94803231876049f07db7"
	I1227 09:45:25.597672  353683 cri.go:96] found id: "c8b5eff27c4f32b2e2d3926915d5eef69dcc564f101afeb65284237bedc9de47"
	I1227 09:45:25.597710  353683 cri.go:96] found id: "cc9aea908d640c5405a83f2749f502470c2bdf01223971af7da3ebb2588fd6ab"
	I1227 09:45:25.597733  353683 cri.go:96] found id: ""
	I1227 09:45:25.597819  353683 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:45:25.609417  353683 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:45:25Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:45:25.609569  353683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:45:25.618182  353683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:45:25.618252  353683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:45:25.618336  353683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:45:25.632559  353683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:45:25.633086  353683 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-513251" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.633265  353683 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "ha-513251" cluster setting kubeconfig missing "ha-513251" context setting]
	I1227 09:45:25.633617  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
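	The kubeconfig repair above fires because neither a "ha-513251" cluster entry nor a matching context exists yet in the kubeconfig file. A minimal sketch of that existence check with k8s.io/client-go, assuming the kubeconfig path from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22343-297941/kubeconfig")
	if err != nil {
		panic(err)
	}
	_, hasCluster := cfg.Clusters["ha-513251"]
	_, hasContext := cfg.Contexts["ha-513251"]
	fmt.Printf("cluster present: %v, context present: %v\n", hasCluster, hasContext)
}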
	I1227 09:45:25.634243  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:45:25.635070  353683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 09:45:25.635170  353683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 09:45:25.635191  353683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 09:45:25.635109  353683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 09:45:25.635305  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 09:45:25.635338  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 09:45:25.635362  353683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 09:45:25.635701  353683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:45:25.649140  353683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 09:45:25.649211  353683 kubeadm.go:602] duration metric: took 30.937903ms to restartPrimaryControlPlane
	I1227 09:45:25.649235  353683 kubeadm.go:403] duration metric: took 94.844629ms to StartCluster
	I1227 09:45:25.649264  353683 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.649374  353683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.650129  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.650407  353683 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:45:25.650466  353683 start.go:242] waiting for startup goroutines ...
	I1227 09:45:25.650497  353683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:45:25.651321  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.656567  353683 out.go:179] * Enabled addons: 
	I1227 09:45:25.659428  353683 addons.go:530] duration metric: took 8.91449ms for enable addons: enabled=[]
	I1227 09:45:25.659506  353683 start.go:247] waiting for cluster config update ...
	I1227 09:45:25.659529  353683 start.go:256] writing updated cluster config ...
	I1227 09:45:25.662807  353683 out.go:203] 
	I1227 09:45:25.666068  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.666232  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.669730  353683 out.go:179] * Starting "ha-513251-m02" control-plane node in "ha-513251" cluster
	I1227 09:45:25.672614  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:25.675545  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:25.678485  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:25.678506  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:25.678618  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:25.678630  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:25.678752  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.678961  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:25.700973  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:25.701000  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:25.701015  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:25.701040  353683 start.go:360] acquireMachinesLock for ha-513251-m02: {Name:mk859480e290b8b366277aa9ac48e168657809ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:25.701095  353683 start.go:364] duration metric: took 35.808µs to acquireMachinesLock for "ha-513251-m02"
	I1227 09:45:25.701120  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:25.701128  353683 fix.go:54] fixHost starting: m02
	I1227 09:45:25.701383  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:25.721891  353683 fix.go:112] recreateIfNeeded on ha-513251-m02: state=Stopped err=<nil>
	W1227 09:45:25.721916  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:25.725291  353683 out.go:252] * Restarting existing docker container for "ha-513251-m02" ...
	I1227 09:45:25.725375  353683 cli_runner.go:164] Run: docker start ha-513251-m02
	I1227 09:45:26.149022  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:26.186961  353683 kic.go:430] container "ha-513251-m02" state is running.
	I1227 09:45:26.187328  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:26.217667  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:26.217913  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:26.217973  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:26.245157  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:26.245467  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:26.245482  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:26.246067  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55528->127.0.0.1:33203: read: connection reset by peer
	I1227 09:45:29.476637  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.476662  353683 ubuntu.go:182] provisioning hostname "ha-513251-m02"
	I1227 09:45:29.476730  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.515584  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.515885  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.515896  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251-m02 && echo "ha-513251-m02" | sudo tee /etc/hostname
	I1227 09:45:29.753613  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.753763  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.802708  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.803015  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.803031  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:30.026916  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:30.027002  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:30.027040  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:30.027088  353683 provision.go:84] configureAuth start
	I1227 09:45:30.027213  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:30.061355  353683 provision.go:143] copyHostCerts
	I1227 09:45:30.061395  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061429  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:30.061436  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061516  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:30.061646  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061664  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:30.061668  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061698  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:30.061741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061761  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:30.061766  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061789  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:30.061835  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251-m02 san=[127.0.0.1 192.168.49.3 ha-513251-m02 localhost minikube]
	I1227 09:45:30.366138  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:30.366258  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:30.366380  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.384700  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:30.494344  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:30.494406  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:30.530895  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:30.530955  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 09:45:30.561682  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:30.561747  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:45:30.591679  353683 provision.go:87] duration metric: took 564.557502ms to configureAuth
	I1227 09:45:30.591755  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:30.592084  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:30.592246  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.621605  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:30.621922  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:30.621937  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:31.635140  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:31.635164  353683 machine.go:97] duration metric: took 5.417238886s to provisionDockerMachine
	I1227 09:45:31.635176  353683 start.go:293] postStartSetup for "ha-513251-m02" (driver="docker")
	I1227 09:45:31.635186  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:31.635250  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:31.635298  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.672186  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.803466  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:31.807580  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:31.807606  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:31.807617  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:31.807677  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:31.807750  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:31.807757  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:31.807862  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:31.825236  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:31.845471  353683 start.go:296] duration metric: took 210.280443ms for postStartSetup
	I1227 09:45:31.845631  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:31.845704  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.863181  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.978613  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:31.988190  353683 fix.go:56] duration metric: took 6.287056138s for fixHost
	I1227 09:45:31.988218  353683 start.go:83] releasing machines lock for "ha-513251-m02", held for 6.287109349s
	I1227 09:45:31.988301  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:32.022351  353683 out.go:179] * Found network options:
	I1227 09:45:32.025233  353683 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 09:45:32.028060  353683 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 09:45:32.028113  353683 proxy.go:120] fail to check proxy env: Error ip not in block
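The two proxy warnings above come from checking whether the NO_PROXY value covers the node's address; "ip not in block" points at a plain IP-in-CIDR containment test. A minimal sketch of that kind of check in Go follows (it is not minikube's actual proxy.go; the 192.168.49.0/24 subnet is an assumed value for the docker network):

package main

import (
	"fmt"
	"net"
)

// ipInBlock reports whether ip falls inside the CIDR block. Minimal sketch of
// the containment test the "ip not in block" warnings above imply.
func ipInBlock(ipStr, cidr string) (bool, error) {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return false, fmt.Errorf("invalid IP %q", ipStr)
	}
	_, block, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	return block.Contains(ip), nil
}

func main() {
	// 192.168.49.2 is the primary control plane from the NO_PROXY line above.
	ok, err := ipInBlock("192.168.49.2", "192.168.49.0/24")
	fmt.Println(ok, err)
}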
	I1227 09:45:32.028186  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:32.028235  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.028260  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:32.028315  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.062562  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.071385  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.418806  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:32.560316  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:32.560399  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:32.576611  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:32.576635  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:32.576667  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:32.576717  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:32.603470  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:32.627343  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:32.627407  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:32.650889  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:32.671280  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:32.901177  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:33.083402  353683 docker.go:234] disabling docker service ...
	I1227 09:45:33.083516  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:33.102162  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:33.117631  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:33.330335  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:33.571932  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:33.588507  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:33.603417  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:33.603487  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.613092  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:33.613161  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.622600  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.632017  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.641471  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:33.650218  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.659580  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.675788  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
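The sed commands above pin the pause image and cgroup manager in the CRI-O drop-in before the runtime is restarted. A minimal local sketch of the pause_image rewrite, assuming the same drop-in path and image tag shown in the log (the real flow runs this as root on the node over SSH, not on the host):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line in a CRI-O drop-in, the local
// equivalent of the sed run over SSH above.
func setPauseImage(confPath, image string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(confPath, out, 0o644)
}

func main() {
	fmt.Println(setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"))
}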
	I1227 09:45:33.690916  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:33.699830  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:45:33.710022  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:33.856695  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:47:04.177050  353683 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320263495s)
	I1227 09:47:04.177079  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:47:04.177137  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:47:04.181790  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:47:04.181861  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:47:04.185784  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:47:04.214501  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:47:04.214588  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.244971  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.277197  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:47:04.280209  353683 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 09:47:04.283165  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:47:04.300447  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:47:04.304396  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:47:04.314914  353683 mustload.go:66] Loading cluster: ha-513251
	I1227 09:47:04.315173  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:04.315461  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:47:04.333467  353683 host.go:66] Checking if "ha-513251" exists ...
	I1227 09:47:04.333753  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.3
	I1227 09:47:04.333767  353683 certs.go:195] generating shared ca certs ...
	I1227 09:47:04.333782  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:47:04.333906  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:47:04.333952  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:47:04.333962  353683 certs.go:257] generating profile certs ...
	I1227 09:47:04.334040  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:47:04.334105  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.2d598068
	I1227 09:47:04.334153  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:47:04.334168  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:47:04.334198  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:47:04.334237  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:47:04.334248  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:47:04.334259  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:47:04.334275  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:47:04.334287  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:47:04.334306  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:47:04.334366  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:47:04.334408  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:47:04.334421  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:47:04.334448  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:47:04.334582  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:47:04.334618  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:47:04.334672  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:47:04.334711  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.334729  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.334741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:47:04.334806  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:47:04.352745  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:47:04.444298  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 09:47:04.448354  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 09:47:04.456699  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 09:47:04.460541  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 09:47:04.469121  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 09:47:04.472996  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 09:47:04.481446  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 09:47:04.484933  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1227 09:47:04.493259  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 09:47:04.497027  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 09:47:04.505596  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 09:47:04.509294  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 09:47:04.517713  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:47:04.537012  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:47:04.556494  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:47:04.576418  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:47:04.597182  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:47:04.618229  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:47:04.641696  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:47:04.663252  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:47:04.684934  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:47:04.716644  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:47:04.737307  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:47:04.758667  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 09:47:04.773792  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 09:47:04.788292  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 09:47:04.802374  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1227 09:47:04.817583  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 09:47:04.831128  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 09:47:04.845769  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 09:47:04.860041  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:47:04.866442  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.874396  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:47:04.882193  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886310  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886373  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.928354  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:47:04.936052  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.943752  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:47:04.952048  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956067  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956176  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.997608  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:47:05.007408  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.017602  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:47:05.026017  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030271  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030427  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.074213  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
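Each CA above is installed by copying the PEM into /usr/share/ca-certificates, asking openssl for its subject hash, and checking a /etc/ssl/certs/<hash>.0 link for it (3ec20f2e.0, b5213941.0 and 51391683.0 are those hash links). A rough Go sketch of that wiring, run as root on the node and only mirroring the effect visible in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the subject hash of a PEM and points
// /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can find the CA.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any existing link, like ln -fs
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
}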
	I1227 09:47:05.082090  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:47:05.086100  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:47:05.128461  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:47:05.172974  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:47:05.215663  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:47:05.263541  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:47:05.307445  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 09:47:05.354461  353683 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 09:47:05.354578  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
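The kubelet unit printed above is rendered per node, with the binaries directory, hostname-override and node-ip substituted in. An illustrative templating sketch follows; the struct, field names and the trimmed-down unit text are assumptions for illustration, not minikube's own template:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the per-node values visible in the unit above.
type kubeletOpts struct {
	BinDir, NodeName, NodeIP string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --config=/var/lib/kubelet/config.yaml

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.35.0",
		NodeName: "ha-513251-m02",
		NodeIP:   "192.168.49.3",
	})
}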
	I1227 09:47:05.354622  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:47:05.354681  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:47:05.367621  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:47:05.367701  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
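The kube-vip manifest above is generated without IPVS-based control-plane load-balancing because the probe `lsmod | grep ip_vs` exited non-zero. A small sketch of that fallback decision, mirroring the log rather than minikube's kube-vip.go:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the probe above: a non-zero exit from
// `lsmod | grep ip_vs` means the ip_vs kernel modules are not loaded.
func ipvsAvailable() bool {
	return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs modules present: enable control-plane load-balancing")
	} else {
		fmt.Println("ip_vs modules missing: generate kube-vip config without it")
	}
}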
	I1227 09:47:05.367789  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:47:05.376110  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:47:05.376225  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 09:47:05.385227  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 09:47:05.399058  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:47:05.412225  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:47:05.433740  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:47:05.438137  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:47:05.449160  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.584548  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.598901  353683 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:47:05.599307  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:05.602962  353683 out.go:179] * Verifying Kubernetes components...
	I1227 09:47:05.605544  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.743183  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.759331  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 09:47:05.759399  353683 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
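The dump above is the rest.Config built from the profile's client certificate, with the stale VIP host (192.168.49.254) swapped for the first control plane (192.168.49.2) as the kubeadm.go:492 line notes. A hedged sketch of building an equivalent client with client-go (requires the external module k8s.io/client-go; paths are copied from the log, and this is not minikube's own kapi helper):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Client config pointing at the first control plane, using the profile's
	// client cert/key and the cluster CA shown in the dump above.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key",
			CAFile:   "/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	fmt.Println(clientset != nil, err)
}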
	I1227 09:47:05.759628  353683 node_ready.go:35] waiting up to 6m0s for node "ha-513251-m02" to be "Ready" ...
	I1227 09:47:36.941690  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:47:36.942141  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:47:39.260337  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:41.261089  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:43.760342  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:45.760773  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:48:49.689213  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:48:49.689567  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:59702->192.168.49.2:8443: read: connection reset by peer
	W1227 09:48:51.761173  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:54.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:56.260764  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:58.260950  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:00.261274  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:02.761180  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:05.261164  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:07.760850  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:09.761126  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:12.261097  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:50:17.401158  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:50:17.401610  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:50:19.760255  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:21.761012  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:23.761193  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:26.260515  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:28.760208  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:30.760293  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:33.261011  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:35.760559  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:38.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:40.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:42.761015  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:45.260386  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:47.760256  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:50.260156  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:52.261185  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:54.760529  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:56.760914  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:58.761079  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:01.260894  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:03.261105  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:05.760186  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:07.761034  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:09.761091  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:21.261754  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:31.267176  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:33.760222  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:35.760997  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:37.761041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:40.260968  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:42.261084  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:44.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:47.260390  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:49.760248  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:51.760405  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:54.260216  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:56.260474  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:58.261041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:00.760814  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:03.260223  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:05.261042  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:07.760972  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:09.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:12.261019  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:14.760290  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:17.260953  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:19.261221  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:21.760250  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:23.760454  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:25.760687  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:28.260326  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:30.260374  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:32.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:34.761068  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:37.261218  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:39.760434  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:41.760931  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:44.260297  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:46.260721  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:48.261137  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:50.760243  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:53.260234  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:55.261149  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:53:05.759756  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": context deadline exceeded
	I1227 09:53:05.759808  353683 node_ready.go:38] duration metric: took 6m0.000151574s for node "ha-513251-m02" to be "Ready" ...
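The six-minute tail of this log is a poll loop: retry the node "Ready" check every couple of seconds until it succeeds or the 6m0s context deadline expires, which produces the `WaitNodeCondition: context deadline exceeded` failure below. A stdlib-only sketch of that wait pattern (the check function is a stub standing in for the GET of the node object):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitNodeReady polls check on an interval until it reports ready or the
// context deadline passes, the same shape as the wait above.
func waitNodeReady(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if ready, err := check(); err == nil && ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err := waitNodeReady(ctx, 2*time.Second, func() (bool, error) {
		return false, errors.New("connection refused") // every attempt fails, as in the log
	})
	fmt.Println(err) // WaitNodeCondition: context deadline exceeded
}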
	I1227 09:53:05.763182  353683 out.go:203] 
	W1227 09:53:05.766205  353683 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1227 09:53:05.766232  353683 out.go:285] * 
	W1227 09:53:05.766486  353683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:53:05.771303  353683 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-513251 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-513251
helpers_test.go:244: (dbg) docker inspect ha-513251:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13",
	        "Created": "2025-12-27T09:37:38.963263504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353813,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:45:18.061061871Z",
	            "FinishedAt": "2025-12-27T09:45:17.324877839Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/hosts",
	        "LogPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13-json.log",
	        "Name": "/ha-513251",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-513251:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-513251",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13",
	                "LowerDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-513251",
	                "Source": "/var/lib/docker/volumes/ha-513251/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-513251",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-513251",
	                "name.minikube.sigs.k8s.io": "ha-513251",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a36bf48a852f2142e03dad97328b97c989e14e43fba2676424d26ea683f38f8a",
	            "SandboxKey": "/var/run/docker/netns/a36bf48a852f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-513251": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f9:a2:53:37:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b4d8553c414af9c151cf56182ba5e11cb773bee9162fafd694324331063b48e",
	                    "EndpointID": "076755f827ee23e4371e7e48c17c1b2920cab289dad51349a1a50ffb80554b20",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-513251",
	                        "bb5d0cc0ca44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
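The inspect output above shows every published port (22, 2376, 5000, 8443, 32443) bound to 127.0.0.1 with an ephemeral host port; the restart path in the logs further down resolves its SSH endpoint from the 22/tcp entry with the same Go template. A minimal sketch of that lookup with the docker CLI, using the profile name from this run (the 33198 value is specific to this run):

	docker container inspect ha-513251 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 33198 here; the provisioning log below then dials 127.0.0.1:33198
	# as user "docker" with the profile's id_rsa key.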
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-513251 -n ha-513251
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-513251 -n ha-513251: exit status 2 (17.835766561s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
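The probe above only renders the Host field of minikube's status template, so the container reports "Running" while the command still exits non-zero; as the helper notes, that usually reflects a component that is not ready rather than a failed command. A sketch of a wider probe against the same profile (this assumes the Kubelet and APIServer fields of the status template, which this log does not exercise):

	out/minikube-linux-arm64 status -p ha-513251 -n ha-513251 \
	  --format 'host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
	echo "exit: $?"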
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 logs -n 25
helpers_test.go:261: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-513251 cp ha-513251-m03:/home/docker/cp-test.txt ha-513251-m04:/home/docker/cp-test_ha-513251-m03_ha-513251-m04.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test_ha-513251-m03_ha-513251-m04.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp testdata/cp-test.txt ha-513251-m04:/home/docker/cp-test.txt                                                             │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4265014863/001/cp-test_ha-513251-m04.txt │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251:/home/docker/cp-test_ha-513251-m04_ha-513251.txt                       │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251.txt                                                 │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m02:/home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m02 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m03:/home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m03 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ node    │ ha-513251 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ node    │ ha-513251 node start m02 --alsologtostderr -v 5                                                                                      │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:42 UTC │
	│ node    │ ha-513251 node list --alsologtostderr -v 5                                                                                           │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │                     │
	│ stop    │ ha-513251 stop --alsologtostderr -v 5                                                                                                │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:43 UTC │
	│ start   │ ha-513251 start --wait true --alsologtostderr -v 5                                                                                   │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:43 UTC │ 27 Dec 25 09:44 UTC │
	│ node    │ ha-513251 node list --alsologtostderr -v 5                                                                                           │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │                     │
	│ node    │ ha-513251 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │ 27 Dec 25 09:44 UTC │
	│ stop    │ ha-513251 stop --alsologtostderr -v 5                                                                                                │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │ 27 Dec 25 09:45 UTC │
	│ start   │ ha-513251 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:45 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:45:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:45:17.780858  353683 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:45:17.781066  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781099  353683 out.go:374] Setting ErrFile to fd 2...
	I1227 09:45:17.781121  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781427  353683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:45:17.781839  353683 out.go:368] Setting JSON to false
	I1227 09:45:17.782724  353683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5271,"bootTime":1766823447,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:45:17.782828  353683 start.go:143] virtualization:  
	I1227 09:45:17.786847  353683 out.go:179] * [ha-513251] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:45:17.789790  353683 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:45:17.789897  353683 notify.go:221] Checking for updates...
	I1227 09:45:17.795846  353683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:45:17.798784  353683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:17.801736  353683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:45:17.804638  353683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:45:17.807626  353683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:45:17.811252  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:17.811891  353683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:45:17.840112  353683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:45:17.840288  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.900770  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.89071505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.900884  353683 docker.go:319] overlay module found
	I1227 09:45:17.905637  353683 out.go:179] * Using the docker driver based on existing profile
	I1227 09:45:17.908470  353683 start.go:309] selected driver: docker
	I1227 09:45:17.908492  353683 start.go:928] validating driver "docker" against &{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.908638  353683 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:45:17.908737  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.967550  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.958343241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.968010  353683 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:45:17.968048  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:17.968104  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:17.968157  353683 start.go:353] cluster config:
	{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.971557  353683 out.go:179] * Starting "ha-513251" primary control-plane node in "ha-513251" cluster
	I1227 09:45:17.974341  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:17.977308  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:17.980127  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:17.980181  353683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:45:17.980196  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:17.980207  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:17.980281  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:17.980293  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:17.980447  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.000295  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:18.000319  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:18.000341  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:18.000375  353683 start.go:360] acquireMachinesLock for ha-513251: {Name:mka277024f8c2226ae51cd2727a8e25e47e84998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:18.000447  353683 start.go:364] duration metric: took 46.926µs to acquireMachinesLock for "ha-513251"
	I1227 09:45:18.000468  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:18.000475  353683 fix.go:54] fixHost starting: 
	I1227 09:45:18.000773  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.022293  353683 fix.go:112] recreateIfNeeded on ha-513251: state=Stopped err=<nil>
	W1227 09:45:18.022327  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:18.025796  353683 out.go:252] * Restarting existing docker container for "ha-513251" ...
	I1227 09:45:18.025962  353683 cli_runner.go:164] Run: docker start ha-513251
	I1227 09:45:18.291407  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.313034  353683 kic.go:430] container "ha-513251" state is running.
	I1227 09:45:18.313680  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:18.336728  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.337162  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:18.337228  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:18.363888  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:18.364313  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:18.364324  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:18.365396  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:45:21.507722  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.507748  353683 ubuntu.go:182] provisioning hostname "ha-513251"
	I1227 09:45:21.507813  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.525335  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.525658  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.525674  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251 && echo "ha-513251" | sudo tee /etc/hostname
	I1227 09:45:21.674143  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.674300  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.692486  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.692814  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.692838  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:21.832635  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:21.832681  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:21.832704  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:21.832713  353683 provision.go:84] configureAuth start
	I1227 09:45:21.832776  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:21.851553  353683 provision.go:143] copyHostCerts
	I1227 09:45:21.851617  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851676  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:21.851690  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851770  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:21.851873  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851904  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:21.851923  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851962  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:21.852092  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852114  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:21.852123  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852155  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:21.852214  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251 san=[127.0.0.1 192.168.49.2 ha-513251 localhost minikube]
	I1227 09:45:21.903039  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:21.903143  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:21.903193  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.920995  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.020706  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:22.020772  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1227 09:45:22.040457  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:22.040545  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:45:22.059426  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:22.059522  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:22.078437  353683 provision.go:87] duration metric: took 245.707104ms to configureAuth
	I1227 09:45:22.078487  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:22.078740  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:22.078852  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.097273  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:22.097592  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:22.097611  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:22.461249  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:22.461332  353683 machine.go:97] duration metric: took 4.124155515s to provisionDockerMachine
	I1227 09:45:22.461358  353683 start.go:293] postStartSetup for "ha-513251" (driver="docker")
	I1227 09:45:22.461396  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:22.461505  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:22.461577  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.484466  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.588039  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:22.591353  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:22.591383  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:22.591396  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:22.591453  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:22.591540  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:22.591553  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:22.591653  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:22.599440  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:22.617415  353683 start.go:296] duration metric: took 156.015491ms for postStartSetup
	I1227 09:45:22.617497  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:22.617543  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.635627  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.733536  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:22.738441  353683 fix.go:56] duration metric: took 4.73795966s for fixHost
	I1227 09:45:22.738473  353683 start.go:83] releasing machines lock for "ha-513251", held for 4.738016497s
	I1227 09:45:22.738547  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:22.756007  353683 ssh_runner.go:195] Run: cat /version.json
	I1227 09:45:22.756077  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.756356  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:22.756411  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.775684  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.784683  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.974776  353683 ssh_runner.go:195] Run: systemctl --version
	I1227 09:45:22.981407  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:23.019688  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:23.024397  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:23.024482  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:23.033023  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:23.033048  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:23.033080  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:23.033128  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:23.048890  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:23.062391  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:23.062461  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:23.078874  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:23.092641  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:23.215628  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:23.336773  353683 docker.go:234] disabling docker service ...
	I1227 09:45:23.336856  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:23.351993  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:23.365076  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:23.486999  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:23.607630  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:23.621666  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:23.637617  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:23.637733  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.646729  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:23.646803  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.656407  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.665374  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.674513  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:23.682899  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.692638  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.701500  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.710461  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:23.718222  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:45:23.726035  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:23.837128  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:45:24.007170  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:45:24.007319  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:45:24.014123  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:45:24.014245  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:45:24.033366  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:45:24.058444  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:45:24.058524  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.087072  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.118588  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:45:24.121527  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:45:24.138224  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:45:24.142467  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:45:24.152932  353683 kubeadm.go:884] updating cluster {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:45:24.153087  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:24.153163  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.188918  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.188945  353683 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:45:24.189006  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.216272  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.216301  353683 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:45:24.216314  353683 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 09:45:24.216440  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:45:24.216534  353683 ssh_runner.go:195] Run: crio config
	I1227 09:45:24.292083  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:24.292105  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:24.292144  353683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:45:24.292181  353683 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-513251 NodeName:ha-513251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:45:24.292330  353683 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-513251"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
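	Editor's note (not part of the log): the block above is the multi-document kubeadm YAML that minikube writes to /var/tmp/minikube/kubeadm.yaml.new on the node. As a minimal sketch, assuming that file path and only a small subset of fields per document, one could sanity-check that every document parses and report its kind and version from Go:

	// parse_kubeadm_config.go -- illustrative sketch only.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	// doc captures just the fields we print; unknown fields are ignored.
	type doc struct {
		APIVersion        string `yaml:"apiVersion"`
		Kind              string `yaml:"kind"`
		KubernetesVersion string `yaml:"kubernetesVersion,omitempty"`
		ClusterDomain     string `yaml:"clusterDomain,omitempty"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var d doc
			if err := dec.Decode(&d); err == io.EOF {
				break
			} else if err != nil {
				log.Fatalf("invalid YAML document: %v", err)
			}
			// One line per document: InitConfiguration, ClusterConfiguration,
			// KubeletConfiguration, KubeProxyConfiguration.
			fmt.Printf("%s/%s version=%q clusterDomain=%q\n",
				d.APIVersion, d.Kind, d.KubernetesVersion, d.ClusterDomain)
		}
	}

	A parse failure here would surface the same kind of config problem before kubeadm ever runs.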
	I1227 09:45:24.292352  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:45:24.292412  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:45:24.304778  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:45:24.304912  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
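	Editor's note (not part of the log): the static pod manifest above is what makes kube-vip advertise the HA virtual IP 192.168.49.254 on port 8443 (both values come from the generated config). A minimal sketch, assuming only those two values, for checking from another node whether the VIP is actually answering:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// TCP-level check only; a TLS or /healthz probe would be stricter.
		conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 3*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("VIP is accepting TCP connections on 8443")
	}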
	I1227 09:45:24.305012  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:45:24.312901  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:45:24.312976  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 09:45:24.320559  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 09:45:24.334537  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:45:24.347371  353683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 09:45:24.360123  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:45:24.373098  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:45:24.376820  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
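	Editor's note (not part of the log): the bash one-liner above is an idempotent /etc/hosts update: strip any existing control-plane.minikube.internal entry, then append the desired mapping. A minimal sketch of the same idea in Go, using the path and IP from the log (it must run as root, just like the logged command):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		const entry = "192.168.49.254\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			log.Fatal(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			// Drop any stale mapping, mirroring the grep -v in the log.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}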
	I1227 09:45:24.387127  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:24.503934  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:45:24.522185  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.2
	I1227 09:45:24.522204  353683 certs.go:195] generating shared ca certs ...
	I1227 09:45:24.522219  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.522359  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:45:24.522410  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:45:24.522417  353683 certs.go:257] generating profile certs ...
	I1227 09:45:24.522498  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:45:24.522526  353683 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14
	I1227 09:45:24.522540  353683 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1227 09:45:24.644648  353683 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 ...
	I1227 09:45:24.648971  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14: {Name:mkb5dff6e9ccf7c0fd52113e0d144d6316de11fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649217  353683 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 ...
	I1227 09:45:24.649259  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14: {Name:mk0fad6909993d85239fadc763725d8b8b7a440c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649401  353683 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt
	I1227 09:45:24.649572  353683 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key
	I1227 09:45:24.649765  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:45:24.649810  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:45:24.649846  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:45:24.649875  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:45:24.649918  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:45:24.649950  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:45:24.649988  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:45:24.650030  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:45:24.650060  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:45:24.650137  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:45:24.650200  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:45:24.650235  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:45:24.650297  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:45:24.650344  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:45:24.650434  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:45:24.650545  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:24.650616  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.650660  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:24.650689  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.651244  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:45:24.675286  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:45:24.694694  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:45:24.717231  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:45:24.749389  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:45:24.770851  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:45:24.790309  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:45:24.811612  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:45:24.834366  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:45:24.853802  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:45:24.871797  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:45:24.894130  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:45:24.908139  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:45:24.914716  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.922797  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:45:24.930729  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934601  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934686  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.976521  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:45:24.984298  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.991944  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:45:24.999664  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020750  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020853  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.066886  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:45:25.076628  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.086029  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:45:25.095338  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101041  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101118  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.145647  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:45:25.156431  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:45:25.165145  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:45:25.214664  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:45:25.265928  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:45:25.352085  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:45:25.431634  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:45:25.492845  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
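	Editor's note (not part of the log): the six openssl invocations above all run "x509 -checkend 86400", i.e. fail if the certificate expires within the next 24 hours. A minimal sketch of the same check done in-process with crypto/x509 instead of shelling out; the certificate path is one of the paths from the log, adjust as needed:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of "-checkend 86400": fail if NotAfter falls within 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
	}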
	I1227 09:45:25.554400  353683 kubeadm.go:401] StartCluster: {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:25.554601  353683 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:45:25.554705  353683 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:45:25.597558  353683 cri.go:96] found id: "7b1da10d6de7d31911e815a0a6e65bec0b462f36adac4663bcba270a51072ce3"
	I1227 09:45:25.597630  353683 cri.go:96] found id: "f69e010776644f8005f4cd92f4774d5dc92d62b50dadf798020d9d8db93f52a7"
	I1227 09:45:25.597649  353683 cri.go:96] found id: "f7e841ab1c87c3a73fb0fa9774a7d5540fae4454f87f94803231876049f07db7"
	I1227 09:45:25.597672  353683 cri.go:96] found id: "c8b5eff27c4f32b2e2d3926915d5eef69dcc564f101afeb65284237bedc9de47"
	I1227 09:45:25.597710  353683 cri.go:96] found id: "cc9aea908d640c5405a83f2749f502470c2bdf01223971af7da3ebb2588fd6ab"
	I1227 09:45:25.597733  353683 cri.go:96] found id: ""
	I1227 09:45:25.597819  353683 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:45:25.609417  353683 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:45:25Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:45:25.609569  353683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:45:25.618182  353683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:45:25.618252  353683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:45:25.618336  353683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:45:25.632559  353683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:45:25.633086  353683 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-513251" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.633265  353683 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "ha-513251" cluster setting kubeconfig missing "ha-513251" context setting]
	I1227 09:45:25.633617  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.634243  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:45:25.635070  353683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 09:45:25.635170  353683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 09:45:25.635191  353683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 09:45:25.635109  353683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 09:45:25.635305  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 09:45:25.635338  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 09:45:25.635362  353683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 09:45:25.635701  353683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:45:25.649140  353683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 09:45:25.649211  353683 kubeadm.go:602] duration metric: took 30.937903ms to restartPrimaryControlPlane
	I1227 09:45:25.649235  353683 kubeadm.go:403] duration metric: took 94.844629ms to StartCluster
	I1227 09:45:25.649264  353683 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.649374  353683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.650129  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.650407  353683 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:45:25.650466  353683 start.go:242] waiting for startup goroutines ...
	I1227 09:45:25.650497  353683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:45:25.651321  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.656567  353683 out.go:179] * Enabled addons: 
	I1227 09:45:25.659428  353683 addons.go:530] duration metric: took 8.91449ms for enable addons: enabled=[]
	I1227 09:45:25.659506  353683 start.go:247] waiting for cluster config update ...
	I1227 09:45:25.659529  353683 start.go:256] writing updated cluster config ...
	I1227 09:45:25.662807  353683 out.go:203] 
	I1227 09:45:25.666068  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.666232  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.669730  353683 out.go:179] * Starting "ha-513251-m02" control-plane node in "ha-513251" cluster
	I1227 09:45:25.672614  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:25.675545  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:25.678485  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:25.678506  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:25.678618  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:25.678630  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:25.678752  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.678961  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:25.700973  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:25.701000  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:25.701015  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:25.701040  353683 start.go:360] acquireMachinesLock for ha-513251-m02: {Name:mk859480e290b8b366277aa9ac48e168657809ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:25.701095  353683 start.go:364] duration metric: took 35.808µs to acquireMachinesLock for "ha-513251-m02"
	I1227 09:45:25.701120  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:25.701128  353683 fix.go:54] fixHost starting: m02
	I1227 09:45:25.701383  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:25.721891  353683 fix.go:112] recreateIfNeeded on ha-513251-m02: state=Stopped err=<nil>
	W1227 09:45:25.721916  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:25.725291  353683 out.go:252] * Restarting existing docker container for "ha-513251-m02" ...
	I1227 09:45:25.725375  353683 cli_runner.go:164] Run: docker start ha-513251-m02
	I1227 09:45:26.149022  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:26.186961  353683 kic.go:430] container "ha-513251-m02" state is running.
	I1227 09:45:26.187328  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:26.217667  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:26.217913  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:26.217973  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:26.245157  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:26.245467  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:26.245482  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:26.246067  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55528->127.0.0.1:33203: read: connection reset by peer
	I1227 09:45:29.476637  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.476662  353683 ubuntu.go:182] provisioning hostname "ha-513251-m02"
	I1227 09:45:29.476730  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.515584  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.515885  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.515896  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251-m02 && echo "ha-513251-m02" | sudo tee /etc/hostname
	I1227 09:45:29.753613  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.753763  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.802708  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.803015  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.803031  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:30.026916  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:30.027002  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:30.027040  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:30.027088  353683 provision.go:84] configureAuth start
	I1227 09:45:30.027213  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:30.061355  353683 provision.go:143] copyHostCerts
	I1227 09:45:30.061395  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061429  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:30.061436  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061516  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:30.061646  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061664  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:30.061668  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061698  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:30.061741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061761  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:30.061766  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061789  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:30.061835  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251-m02 san=[127.0.0.1 192.168.49.3 ha-513251-m02 localhost minikube]
	I1227 09:45:30.366138  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:30.366258  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:30.366380  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.384700  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:30.494344  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:30.494406  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:30.530895  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:30.530955  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 09:45:30.561682  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:30.561747  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:45:30.591679  353683 provision.go:87] duration metric: took 564.557502ms to configureAuth
	I1227 09:45:30.591755  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:30.592084  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:30.592246  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.621605  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:30.621922  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:30.621937  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:31.635140  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:31.635164  353683 machine.go:97] duration metric: took 5.417238886s to provisionDockerMachine
	I1227 09:45:31.635176  353683 start.go:293] postStartSetup for "ha-513251-m02" (driver="docker")
	I1227 09:45:31.635186  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:31.635250  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:31.635298  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.672186  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.803466  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:31.807580  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:31.807606  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:31.807617  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:31.807677  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:31.807750  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:31.807757  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:31.807862  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:31.825236  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:31.845471  353683 start.go:296] duration metric: took 210.280443ms for postStartSetup
	I1227 09:45:31.845631  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:31.845704  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.863181  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.978613  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:31.988190  353683 fix.go:56] duration metric: took 6.287056138s for fixHost
	I1227 09:45:31.988218  353683 start.go:83] releasing machines lock for "ha-513251-m02", held for 6.287109349s
	I1227 09:45:31.988301  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:32.022351  353683 out.go:179] * Found network options:
	I1227 09:45:32.025233  353683 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 09:45:32.028060  353683 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 09:45:32.028113  353683 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 09:45:32.028186  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:32.028235  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.028260  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:32.028315  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.062562  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.071385  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.418806  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:32.560316  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:32.560399  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:32.576611  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:32.576635  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:32.576667  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:32.576717  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:32.603470  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:32.627343  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:32.627407  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:32.650889  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:32.671280  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:32.901177  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:33.083402  353683 docker.go:234] disabling docker service ...
	I1227 09:45:33.083516  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:33.102162  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:33.117631  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:33.330335  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:33.571932  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:33.588507  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:33.603417  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:33.603487  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.613092  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:33.613161  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.622600  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.632017  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.641471  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:33.650218  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.659580  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.675788  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.690916  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:33.699830  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:45:33.710022  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:33.856695  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:47:04.177050  353683 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320263495s)
	I1227 09:47:04.177079  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:47:04.177137  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
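	Editor's note (not part of the log): the "systemctl restart crio" above took an unusually long 1m30s, after which minikube waits up to 60s for the CRI socket to appear. A minimal sketch of that wait, assuming the socket path and timeout from the log and an arbitrary 500ms polling interval:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock"
		deadline := time.Now().Add(60 * time.Second)
		for {
			// Same signal as the logged "stat /var/run/crio/crio.sock".
			if _, err := os.Stat(sock); err == nil {
				fmt.Println("CRI socket is present:", sock)
				return
			}
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
				os.Exit(1)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}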
	I1227 09:47:04.181790  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:47:04.181861  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:47:04.185784  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:47:04.214501  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:47:04.214588  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.244971  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.277197  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:47:04.280209  353683 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 09:47:04.283165  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:47:04.300447  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:47:04.304396  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:47:04.314914  353683 mustload.go:66] Loading cluster: ha-513251
	I1227 09:47:04.315173  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:04.315461  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:47:04.333467  353683 host.go:66] Checking if "ha-513251" exists ...
	I1227 09:47:04.333753  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.3
	I1227 09:47:04.333767  353683 certs.go:195] generating shared ca certs ...
	I1227 09:47:04.333782  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:47:04.333906  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:47:04.333952  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:47:04.333962  353683 certs.go:257] generating profile certs ...
	I1227 09:47:04.334040  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:47:04.334105  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.2d598068
	I1227 09:47:04.334153  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:47:04.334168  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:47:04.334198  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:47:04.334237  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:47:04.334248  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:47:04.334259  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:47:04.334275  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:47:04.334287  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:47:04.334306  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:47:04.334366  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:47:04.334408  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:47:04.334421  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:47:04.334448  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:47:04.334582  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:47:04.334618  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:47:04.334672  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:47:04.334711  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.334729  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.334741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:47:04.334806  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:47:04.352745  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:47:04.444298  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 09:47:04.448354  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 09:47:04.456699  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 09:47:04.460541  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 09:47:04.469121  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 09:47:04.472996  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 09:47:04.481446  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 09:47:04.484933  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1227 09:47:04.493259  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 09:47:04.497027  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 09:47:04.505596  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 09:47:04.509294  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 09:47:04.517713  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:47:04.537012  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:47:04.556494  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:47:04.576418  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:47:04.597182  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:47:04.618229  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:47:04.641696  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:47:04.663252  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:47:04.684934  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:47:04.716644  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:47:04.737307  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:47:04.758667  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 09:47:04.773792  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 09:47:04.788292  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 09:47:04.802374  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1227 09:47:04.817583  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 09:47:04.831128  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 09:47:04.845769  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 09:47:04.860041  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:47:04.866442  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.874396  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:47:04.882193  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886310  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886373  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.928354  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:47:04.936052  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.943752  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:47:04.952048  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956067  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956176  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.997608  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:47:05.007408  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.017602  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:47:05.026017  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030271  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030427  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.074213  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:47:05.082090  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:47:05.086100  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:47:05.128461  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:47:05.172974  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:47:05.215663  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:47:05.263541  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:47:05.307445  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
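	
	The `-checkend 86400` runs above ask openssl whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit would force regeneration before the node joins. A minimal Go sketch of the same check (illustrative only, not minikube's certs.go; the path is taken from the log):
	
	-- illustrative sketch (Go, not part of the test output) --
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// certValidFor reports whether the PEM certificate at path is still valid
	// "seconds" from now, mirroring `openssl x509 -checkend <seconds>`.
	func certValidFor(path string, seconds int64) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		deadline := time.Now().Add(time.Duration(seconds) * time.Second)
		return deadline.Before(cert.NotAfter), nil
	}
	
	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/front-proxy-client.crt", 86400)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		fmt.Println("valid for the next 24h:", ok)
	}
	-- /illustrative sketch --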
	I1227 09:47:05.354461  353683 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 09:47:05.354578  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:47:05.354622  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:47:05.354681  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:47:05.367621  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:47:05.367701  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
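	
	The "giving up enabling control-plane load-balancing" message above comes from the ip_vs probe: minikube runs `lsmod | grep ip_vs` on the node and, because grep exits 1, it writes the kube-vip static pod manifest without IPVS-backed load balancing, leaving only the ARP-advertised VIP 192.168.49.254. A rough Go sketch of such a probe (hypothetical helper, run locally rather than over SSH):
	
	-- illustrative sketch (Go, not part of the test output) --
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// ipvsAvailable reports whether any ip_vs kernel module is loaded, mirroring
	// the `sudo sh -c "lsmod | grep ip_vs"` probe from the log: grep exits 1
	// when nothing matches, which is treated as "not available".
	func ipvsAvailable() bool {
		return exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run() == nil
	}
	
	func main() {
		if ipvsAvailable() {
			fmt.Println("ip_vs loaded: kube-vip could enable control-plane load balancing")
		} else {
			fmt.Println("ip_vs missing: fall back to ARP-only VIP advertisement")
		}
	}
	-- /illustrative sketch --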
	I1227 09:47:05.367789  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:47:05.376110  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:47:05.376225  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 09:47:05.385227  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 09:47:05.399058  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:47:05.412225  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:47:05.433740  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:47:05.438137  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
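	
	The bash one-liner above pins control-plane.minikube.internal to the HA VIP by filtering any stale mapping out of /etc/hosts and appending the current one. The same rewrite, expressed as a small Go function over the file contents (hypothetical helper, for illustration only):
	
	-- illustrative sketch (Go, not part of the test output) --
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// pinControlPlane drops any existing line ending in "<tab><host>" and appends
	// a fresh "<vip><tab><host>" entry, mirroring the grep -v / echo pipeline.
	func pinControlPlane(hosts, vip, host string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, vip+"\t"+host)
		return strings.Join(kept, "\n") + "\n"
	}
	
	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.49.200\tcontrol-plane.minikube.internal\n"
		fmt.Print(pinControlPlane(hosts, "192.168.49.254", "control-plane.minikube.internal"))
	}
	-- /illustrative sketch --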
	I1227 09:47:05.449160  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.584548  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.598901  353683 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:47:05.599307  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:05.602962  353683 out.go:179] * Verifying Kubernetes components...
	I1227 09:47:05.605544  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.743183  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.759331  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 09:47:05.759399  353683 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 09:47:05.759628  353683 node_ready.go:35] waiting up to 6m0s for node "ha-513251-m02" to be "Ready" ...
	I1227 09:47:36.941690  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:47:36.942141  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:47:39.260337  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:41.261089  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:43.760342  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:45.760773  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:48:49.689213  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:48:49.689567  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:59702->192.168.49.2:8443: read: connection reset by peer
	W1227 09:48:51.761173  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:54.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:56.260764  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:58.260950  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:00.261274  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:02.761180  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:05.261164  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:07.760850  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:09.761126  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:12.261097  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:50:17.401158  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:50:17.401610  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:50:19.760255  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:21.761012  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:23.761193  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:26.260515  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:28.760208  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:30.760293  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:33.261011  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:35.760559  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:38.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:40.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:42.761015  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:45.260386  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:47.760256  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:50.260156  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:52.261185  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:54.760529  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:56.760914  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:58.761079  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:01.260894  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:03.261105  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:05.760186  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:07.761034  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:09.761091  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:21.261754  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:31.267176  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:33.760222  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:35.760997  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:37.761041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:40.260968  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:42.261084  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:44.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:47.260390  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:49.760248  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:51.760405  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:54.260216  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:56.260474  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:58.261041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:00.760814  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:03.260223  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:05.261042  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:07.760972  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:09.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:12.261019  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:14.760290  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:17.260953  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:19.261221  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:21.760250  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:23.760454  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:25.760687  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:28.260326  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:30.260374  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:32.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:34.761068  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:37.261218  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:39.760434  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:41.760931  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:44.260297  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:46.260721  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:48.261137  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:50.760243  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:53.260234  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:55.261149  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:53:05.759756  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": context deadline exceeded
	I1227 09:53:05.759808  353683 node_ready.go:38] duration metric: took 6m0.000151574s for node "ha-513251-m02" to be "Ready" ...
	I1227 09:53:05.763182  353683 out.go:203] 
	W1227 09:53:05.766205  353683 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1227 09:53:05.766232  353683 out.go:285] * 
	W1227 09:53:05.766486  353683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:53:05.771303  353683 out.go:203] 
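	
	The failure itself is a readiness timeout: node_ready.go polls the Ready condition of ha-513251-m02 for six minutes, treating every connection-refused and EOF as retryable, and exits with GUEST_START once the deadline passes. A condensed client-go sketch of that kind of poll (illustrative only, not minikube's node_ready.go; the kubeconfig path and node name are taken from the log):
	
	-- illustrative sketch (Go, not part of the test output) --
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitNodeReady polls until the named node reports Ready=True or the timeout
	// elapses; transient API errors are swallowed and retried, like the log's loop.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // connection refused / EOF: keep retrying
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = waitNodeReady(context.Background(), cs, "ha-513251-m02", 6*time.Minute)
		fmt.Println("wait result:", err)
	}
	-- /illustrative sketch --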
	
	
	==> CRI-O <==
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.704047446Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=df594719-7494-4e4b-8b96-ff6b50da7943 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.705214137Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=1db5f92f-e3de-4051-b1e8-f4a521df221b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.705367008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.714658246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.715313645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.733436178Z" level=info msg="Created container 4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=1db5f92f-e3de-4051-b1e8-f4a521df221b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.734077735Z" level=info msg="Starting container: 4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3" id=eeb97769-30ec-478a-bc87-4f69060f31cf name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.736017513Z" level=info msg="Started container" PID=1255 containerID=4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3 description=kube-system/kube-controller-manager-ha-513251/kube-controller-manager id=eeb97769-30ec-478a-bc87-4f69060f31cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=77c125920c3f982d94f3bc7831d664d32af1a76fa71da885b790a865c893eed1
	Dec 27 09:52:27 ha-513251 conmon[1253]: conmon 4694ec899710cc574db8 <ninfo>: container 1255 exited with status 1
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.770243941Z" level=info msg="Removing container: 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.777709971Z" level=info msg="Error loading conmon cgroup of container 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6: cgroup deleted" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.780784669Z" level=info msg="Removed container 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.701490281Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=079299f4-9d89-491a-8d17-2a3678443aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.702675032Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=8c967218-255a-4dbf-a2a1-3e466c02b6e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.703767818Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=0fd3892f-ad02-44ff-b1fb-2d96da8680c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.70386432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.712337252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.712883974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.73007927Z" level=info msg="Created container 7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=0fd3892f-ad02-44ff-b1fb-2d96da8680c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.730833829Z" level=info msg="Starting container: 7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908" id=0cf41e99-8376-4017-8d87-0efd593514d8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.740489445Z" level=info msg="Started container" PID=1272 containerID=7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908 description=kube-system/kube-apiserver-ha-513251/kube-apiserver id=0cf41e99-8376-4017-8d87-0efd593514d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a678fda46be5a152fa8932be97637587d68f62be01ebcbef8a2cc06dc92777be
	Dec 27 09:53:17 ha-513251 conmon[1269]: conmon 7e32d77299b93ef151c5 <ninfo>: container 1272 exited with status 255
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.881086782Z" level=info msg="Removing container: 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.888327431Z" level=info msg="Error loading conmon cgroup of container 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53: cgroup deleted" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.891415742Z" level=info msg="Removed container 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	7e32d77299b93       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   27 seconds ago       Exited              kube-apiserver            7                   a678fda46be5a       kube-apiserver-ha-513251            kube-system
	4694ec899710c       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   About a minute ago   Exited              kube-controller-manager   9                   77c125920c3f9       kube-controller-manager-ha-513251   kube-system
	3e2f79bfcc297       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   2 minutes ago        Running             etcd                      3                   0b4fdbfc50d52       etcd-ha-513251                      kube-system
	f69e010776644       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   7 minutes ago        Running             kube-scheduler            2                   8f6686604e637       kube-scheduler-ha-513251            kube-system
	f7e841ab1c87c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   7 minutes ago        Exited              etcd                      2                   0b4fdbfc50d52       etcd-ha-513251                      kube-system
	cc9aea908d640       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   7 minutes ago        Running             kube-vip                  2                   9c394d0758080       kube-vip-ha-513251                  kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015479] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.516409] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034238] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.771451] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.481009] kauditd_printk_skb: 39 callbacks suppressed
	[Dec27 08:29] hrtimer: interrupt took 43410871 ns
	[Dec27 09:29] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 09:30] overlayfs: idmapped layers are currently not supported
	[  +0.068519] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[ +46.937326] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:42] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[  +3.379616] overlayfs: idmapped layers are currently not supported
	[ +26.881821] overlayfs: idmapped layers are currently not supported
	[Dec27 09:44] overlayfs: idmapped layers are currently not supported
	[Dec27 09:45] overlayfs: idmapped layers are currently not supported
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3e2f79bfcc29755ed4c6ee91cec29fd05896c608e4d72883a5b019d5f8609903] <==
	{"level":"info","ts":"2025-12-27T09:53:20.061022Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:20.061060Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:20.061071Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-27T09:53:20.104905Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8e7fd81d8c1de671","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T09:53:20.104928Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8e7fd81d8c1de671","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T09:53:20.336055Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:20.836862Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:21.098388Z","caller":"etcdserver/server.go:1830","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-513251 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"warn","ts":"2025-12-27T09:53:21.337959Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-27T09:53:21.661416Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:21.661466Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:21.661487Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:21.661521Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:21.661532Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-27T09:53:21.838138Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:22.339276Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:22.840334Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-27T09:53:23.261603Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:23.261658Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:23.261681Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:23.261727Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:23.261746Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-27T09:53:23.341131Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:23.842273Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:24.343090Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	
	
	==> etcd [f7e841ab1c87c3a73fb0fa9774a7d5540fae4454f87f94803231876049f07db7] <==
	{"level":"info","ts":"2025-12-27T09:50:39.850450Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T09:50:39.850492Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-513251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-27T09:50:39.850587Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:50:39.852081Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:50:39.853588Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853628Z","caller":"etcdserver/server.go:1288","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-12-27T09:50:39.853662Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-27T09:50:39.853729Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-27T09:50:39.853751Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"info","ts":"2025-12-27T09:50:39.853765Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853767Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853782Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853783Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:50:39.853792Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853813Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853823Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853829Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853834Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853843Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"error","ts":"2025-12-27T09:50:39.853842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853851Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"info","ts":"2025-12-27T09:50:39.860164Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-27T09:50:39.860448Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.860488Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-27T09:50:39.860499Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-513251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:53:24 up  1:35,  0 user,  load average: 0.15, 0.78, 1.66
	Linux ha-513251 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908] <==
	I1227 09:52:56.791839       1 options.go:263] external host was not specified, using 192.168.49.2
	I1227 09:52:56.794835       1 server.go:150] Version: v1.35.0
	I1227 09:52:56.794953       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1227 09:52:57.278394       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:52:57.279880       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1227 09:52:57.280532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1227 09:52:57.284066       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:52:57.287400       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1227 09:52:57.287488       1 plugins.go:160] Loaded 14 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,NodeDeclaredFeatureValidator,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1227 09:52:57.287738       1 instance.go:240] Using reconciler: lease
	W1227 09:52:57.289397       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:53:17.278022       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:53:17.280084       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1227 09:53:17.288757       1 instance.go:233] Error creating leases: error creating storage factory: context deadline exceeded
	W1227 09:53:17.288844       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	
	
	==> kube-controller-manager [4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3] <==
	I1227 09:52:17.366708       1 serving.go:386] Generated self-signed cert in-memory
	I1227 09:52:17.376666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 09:52:17.376702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:52:17.378190       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 09:52:17.378332       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 09:52:17.378381       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 09:52:17.378538       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 09:52:27.380746       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [f69e010776644f8005f4cd92f4774d5dc92d62b50dadf798020d9d8db93f52a7] <==
	E1227 09:49:23.577558       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:49:25.962683       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:49:27.718552       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:49:28.729793       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:49:30.037535       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:49:34.733607       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:49:34.929599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:49:35.092788       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:49:35.988125       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:49:38.452688       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:49:38.595135       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:49:44.386717       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:49:45.790610       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:49:49.151819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:50:03.444648       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 09:50:03.690487       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:50:03.834739       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:50:04.045150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:50:07.144662       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:50:07.401271       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:50:07.608201       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:50:10.033692       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:50:13.073103       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:50:14.539398       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:50:15.853200       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	
	
	==> kubelet <==
	Dec 27 09:53:22 ha-513251 kubelet[805]: E1227 09:53:22.643880     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:22 ha-513251 kubelet[805]: E1227 09:53:22.703202     805 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-513251\" not found" node="ha-513251"
	Dec 27 09:53:22 ha-513251 kubelet[805]: E1227 09:53:22.703291     805 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-513251" containerName="kube-apiserver"
	Dec 27 09:53:22 ha-513251 kubelet[805]: I1227 09:53:22.703309     805 scope.go:122] "RemoveContainer" containerID="7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908"
	Dec 27 09:53:22 ha-513251 kubelet[805]: E1227 09:53:22.703447     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-513251_kube-system(0da9dd3b4fd74f83fca53d342cc4832b)\"" pod="kube-system/kube-apiserver-ha-513251" podUID="0da9dd3b4fd74f83fca53d342cc4832b"
	Dec 27 09:53:22 ha-513251 kubelet[805]: E1227 09:53:22.744810     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:22 ha-513251 kubelet[805]: E1227 09:53:22.845736     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:22 ha-513251 kubelet[805]: E1227 09:53:22.947211     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.048546     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.149929     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.250759     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.351750     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.404679     805 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8443/api/v1/namespaces/default/events/ha-513251.1885095d3532781e\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-513251.1885095d3532781e  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-513251,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-513251 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-513251,},FirstTimestamp:2025-12-27 09:45:24.741896222 +0000 UTC m=+0.205429555,LastTimestamp:2025-12-27 09:45:24.800402858 +0000 UTC m=+0.263936191,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-513251,}"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.453290     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.554327     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.655591     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.756467     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.857240     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:23 ha-513251 kubelet[805]: E1227 09:53:23.957895     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:24 ha-513251 kubelet[805]: E1227 09:53:24.059150     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:24 ha-513251 kubelet[805]: E1227 09:53:24.160549     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:24 ha-513251 kubelet[805]: E1227 09:53:24.261391     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:24 ha-513251 kubelet[805]: E1227 09:53:24.362302     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:24 ha-513251 kubelet[805]: E1227 09:53:24.463456     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:24 ha-513251 kubelet[805]: E1227 09:53:24.563934     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-513251 -n ha-513251
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-513251 -n ha-513251: exit status 2 (382.533845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "ha-513251" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (487.37s)
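The component logs above tell a consistent story: kube-apiserver cannot reach etcd on 127.0.0.1:2379 and exits, the controller-manager then fails its apiserver /healthz check, and kubelet keeps getting "connection refused" on 192.168.49.2:8443. Outside the test suite, that reachability check can be reproduced with a small probe; the following is an illustrative sketch (not part of minikube or these tests), hitting the same endpoint the controller-manager reports:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the endpoint the controller-manager reports failing above.
        // Certificate verification is skipped because this is only a
        // reachability check, not an authenticated API call.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. "connect: connection refused", as in the kubelet log
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver answered /healthz with:", resp.Status)
    }

Any HTTP status here (even 401/403 for an anonymous caller) would show the apiserver is at least listening; the "connection refused" seen in the logs above means it never got that far, because its etcd backend at 127.0.0.1:2379 was unreachable.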

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-513251" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-513251\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-513251\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.35.0\",\"ClusterName\":\"ha-513251\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000,\"Rosetta\":false},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
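The assertion above derives the expected "Degraded" status from the JSON printed by `profile list --output json`, whose relevant shape is visible in the quoted blob ("valid" entries with "Name" and "Status" fields). A minimal, hypothetical sketch of that check done standalone (struct and variable names here are illustrative, not the ones ha_test.go uses):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList mirrors only the fields of `profile list --output json`
    // needed for this check; the key names match the JSON quoted above.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        // Same binary path used throughout this report.
        out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("decoding profile list failed:", err)
            return
        }
        for _, p := range pl.Valid {
            if p.Name == "ha-513251" {
                fmt.Printf("profile %s reports status %q (test expected %q)\n", p.Name, p.Status, "Degraded")
            }
        }
    }

In this run the loop would print "Starting", which is exactly the mismatch the assertion reports.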
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-513251
helpers_test.go:244: (dbg) docker inspect ha-513251:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13",
	        "Created": "2025-12-27T09:37:38.963263504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353813,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:45:18.061061871Z",
	            "FinishedAt": "2025-12-27T09:45:17.324877839Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/hosts",
	        "LogPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13-json.log",
	        "Name": "/ha-513251",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-513251:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-513251",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13",
	                "LowerDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-513251",
	                "Source": "/var/lib/docker/volumes/ha-513251/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-513251",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-513251",
	                "name.minikube.sigs.k8s.io": "ha-513251",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a36bf48a852f2142e03dad97328b97c989e14e43fba2676424d26ea683f38f8a",
	            "SandboxKey": "/var/run/docker/netns/a36bf48a852f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-513251": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f9:a2:53:37:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b4d8553c414af9c151cf56182ba5e11cb773bee9162fafd694324331063b48e",
	                    "EndpointID": "076755f827ee23e4371e7e48c17c1b2920cab289dad51349a1a50ffb80554b20",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-513251",
	                        "bb5d0cc0ca44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
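The inspect output above is also where the host-side endpoint for the API server comes from: 8443/tcp is published on 127.0.0.1:33201 in this run. If one wanted to pull that mapping directly, a small sketch using docker's Go template syntax (illustrative only; the report itself only queries {{.State.Status}}):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Pull the published host port for the API server (8443/tcp) straight
        // from the inspect data shown above.
        format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "--format", format, "ha-513251").Output()
        if err != nil {
            fmt.Println("docker inspect failed:", err)
            return
        }
        fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
    }

For the container state captured above this prints 33201, matching the 8443/tcp entry under NetworkSettings.Ports.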
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-513251 -n ha-513251
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-513251 -n ha-513251: exit status 2 (330.772766ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 logs -n 25
helpers_test.go:261: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-513251 cp ha-513251-m03:/home/docker/cp-test.txt ha-513251-m04:/home/docker/cp-test_ha-513251-m03_ha-513251-m04.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test_ha-513251-m03_ha-513251-m04.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp testdata/cp-test.txt ha-513251-m04:/home/docker/cp-test.txt                                                             │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4265014863/001/cp-test_ha-513251-m04.txt │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251:/home/docker/cp-test_ha-513251-m04_ha-513251.txt                       │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251.txt                                                 │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m02:/home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m02 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m03:/home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m03 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ node    │ ha-513251 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ node    │ ha-513251 node start m02 --alsologtostderr -v 5                                                                                      │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:42 UTC │
	│ node    │ ha-513251 node list --alsologtostderr -v 5                                                                                           │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │                     │
	│ stop    │ ha-513251 stop --alsologtostderr -v 5                                                                                                │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:43 UTC │
	│ start   │ ha-513251 start --wait true --alsologtostderr -v 5                                                                                   │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:43 UTC │ 27 Dec 25 09:44 UTC │
	│ node    │ ha-513251 node list --alsologtostderr -v 5                                                                                           │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │                     │
	│ node    │ ha-513251 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │ 27 Dec 25 09:44 UTC │
	│ stop    │ ha-513251 stop --alsologtostderr -v 5                                                                                                │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │ 27 Dec 25 09:45 UTC │
	│ start   │ ha-513251 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:45 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:45:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:45:17.780858  353683 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:45:17.781066  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781099  353683 out.go:374] Setting ErrFile to fd 2...
	I1227 09:45:17.781121  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781427  353683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:45:17.781839  353683 out.go:368] Setting JSON to false
	I1227 09:45:17.782724  353683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5271,"bootTime":1766823447,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:45:17.782828  353683 start.go:143] virtualization:  
	I1227 09:45:17.786847  353683 out.go:179] * [ha-513251] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:45:17.789790  353683 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:45:17.789897  353683 notify.go:221] Checking for updates...
	I1227 09:45:17.795846  353683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:45:17.798784  353683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:17.801736  353683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:45:17.804638  353683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:45:17.807626  353683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:45:17.811252  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:17.811891  353683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:45:17.840112  353683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:45:17.840288  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.900770  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.89071505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.900884  353683 docker.go:319] overlay module found
	I1227 09:45:17.905637  353683 out.go:179] * Using the docker driver based on existing profile
	I1227 09:45:17.908470  353683 start.go:309] selected driver: docker
	I1227 09:45:17.908492  353683 start.go:928] validating driver "docker" against &{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.908638  353683 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:45:17.908737  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.967550  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.958343241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.968010  353683 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:45:17.968048  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:17.968104  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:17.968157  353683 start.go:353] cluster config:
	{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.971557  353683 out.go:179] * Starting "ha-513251" primary control-plane node in "ha-513251" cluster
	I1227 09:45:17.974341  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:17.977308  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:17.980127  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:17.980181  353683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:45:17.980196  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:17.980207  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:17.980281  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:17.980293  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:17.980447  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.000295  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:18.000319  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:18.000341  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:18.000375  353683 start.go:360] acquireMachinesLock for ha-513251: {Name:mka277024f8c2226ae51cd2727a8e25e47e84998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:18.000447  353683 start.go:364] duration metric: took 46.926µs to acquireMachinesLock for "ha-513251"
	I1227 09:45:18.000468  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:18.000475  353683 fix.go:54] fixHost starting: 
	I1227 09:45:18.000773  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.022293  353683 fix.go:112] recreateIfNeeded on ha-513251: state=Stopped err=<nil>
	W1227 09:45:18.022327  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:18.025796  353683 out.go:252] * Restarting existing docker container for "ha-513251" ...
	I1227 09:45:18.025962  353683 cli_runner.go:164] Run: docker start ha-513251
	I1227 09:45:18.291407  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.313034  353683 kic.go:430] container "ha-513251" state is running.
	I1227 09:45:18.313680  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:18.336728  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.337162  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:18.337228  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:18.363888  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:18.364313  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:18.364324  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:18.365396  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:45:21.507722  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.507748  353683 ubuntu.go:182] provisioning hostname "ha-513251"
	I1227 09:45:21.507813  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.525335  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.525658  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.525674  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251 && echo "ha-513251" | sudo tee /etc/hostname
	I1227 09:45:21.674143  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.674300  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.692486  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.692814  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.692838  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:21.832635  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:21.832681  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:21.832704  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:21.832713  353683 provision.go:84] configureAuth start
	I1227 09:45:21.832776  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:21.851553  353683 provision.go:143] copyHostCerts
	I1227 09:45:21.851617  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851676  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:21.851690  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851770  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:21.851873  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851904  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:21.851923  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851962  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:21.852092  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852114  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:21.852123  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852155  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:21.852214  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251 san=[127.0.0.1 192.168.49.2 ha-513251 localhost minikube]
	I1227 09:45:21.903039  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:21.903143  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:21.903193  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.920995  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.020706  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:22.020772  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1227 09:45:22.040457  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:22.040545  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:45:22.059426  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:22.059522  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:22.078437  353683 provision.go:87] duration metric: took 245.707104ms to configureAuth
	I1227 09:45:22.078487  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:22.078740  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:22.078852  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.097273  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:22.097592  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:22.097611  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:22.461249  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:22.461332  353683 machine.go:97] duration metric: took 4.124155515s to provisionDockerMachine
	I1227 09:45:22.461358  353683 start.go:293] postStartSetup for "ha-513251" (driver="docker")
	I1227 09:45:22.461396  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:22.461505  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:22.461577  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.484466  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.588039  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:22.591353  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:22.591383  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:22.591396  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:22.591453  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:22.591540  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:22.591553  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:22.591653  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:22.599440  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:22.617415  353683 start.go:296] duration metric: took 156.015491ms for postStartSetup
	I1227 09:45:22.617497  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:22.617543  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.635627  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.733536  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:22.738441  353683 fix.go:56] duration metric: took 4.73795966s for fixHost
	I1227 09:45:22.738473  353683 start.go:83] releasing machines lock for "ha-513251", held for 4.738016497s
	I1227 09:45:22.738547  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:22.756007  353683 ssh_runner.go:195] Run: cat /version.json
	I1227 09:45:22.756077  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.756356  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:22.756411  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.775684  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.784683  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.974776  353683 ssh_runner.go:195] Run: systemctl --version
	I1227 09:45:22.981407  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:23.019688  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:23.024397  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:23.024482  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:23.033023  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:23.033048  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:23.033080  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:23.033128  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:23.048890  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:23.062391  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:23.062461  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:23.078874  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:23.092641  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:23.215628  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:23.336773  353683 docker.go:234] disabling docker service ...
	I1227 09:45:23.336856  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:23.351993  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:23.365076  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:23.486999  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:23.607630  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:23.621666  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:23.637617  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:23.637733  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.646729  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:23.646803  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.656407  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.665374  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.674513  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:23.682899  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.692638  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.701500  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.710461  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:23.718222  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:45:23.726035  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:23.837128  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:45:24.007170  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:45:24.007319  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:45:24.014123  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:45:24.014245  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:45:24.033366  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:45:24.058444  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:45:24.058524  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.087072  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.118588  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:45:24.121527  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:45:24.138224  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:45:24.142467  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:45:24.152932  353683 kubeadm.go:884] updating cluster {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:45:24.153087  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:24.153163  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.188918  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.188945  353683 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:45:24.189006  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.216272  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.216301  353683 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:45:24.216314  353683 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 09:45:24.216440  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:45:24.216534  353683 ssh_runner.go:195] Run: crio config
	I1227 09:45:24.292083  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:24.292105  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:24.292144  353683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:45:24.292181  353683 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-513251 NodeName:ha-513251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:45:24.292330  353683 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-513251"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:45:24.292352  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:45:24.292412  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:45:24.304778  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:45:24.304912  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 09:45:24.305012  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:45:24.312901  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:45:24.312976  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 09:45:24.320559  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 09:45:24.334537  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:45:24.347371  353683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 09:45:24.360123  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:45:24.373098  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:45:24.376820  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:45:24.387127  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:24.503934  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:45:24.522185  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.2
	I1227 09:45:24.522204  353683 certs.go:195] generating shared ca certs ...
	I1227 09:45:24.522219  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.522359  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:45:24.522410  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:45:24.522417  353683 certs.go:257] generating profile certs ...
	I1227 09:45:24.522498  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:45:24.522526  353683 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14
	I1227 09:45:24.522540  353683 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1227 09:45:24.644648  353683 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 ...
	I1227 09:45:24.648971  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14: {Name:mkb5dff6e9ccf7c0fd52113e0d144d6316de11fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649217  353683 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 ...
	I1227 09:45:24.649259  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14: {Name:mk0fad6909993d85239fadc763725d8b8b7a440c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649401  353683 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt
	I1227 09:45:24.649572  353683 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key
	I1227 09:45:24.649765  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:45:24.649810  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:45:24.649846  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:45:24.649875  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:45:24.649918  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:45:24.649950  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:45:24.649988  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:45:24.650030  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:45:24.650060  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:45:24.650137  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:45:24.650200  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:45:24.650235  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:45:24.650297  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:45:24.650344  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:45:24.650434  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:45:24.650545  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:24.650616  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.650660  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:24.650689  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.651244  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:45:24.675286  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:45:24.694694  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:45:24.717231  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:45:24.749389  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:45:24.770851  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:45:24.790309  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:45:24.811612  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:45:24.834366  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:45:24.853802  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:45:24.871797  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:45:24.894130  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:45:24.908139  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:45:24.914716  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.922797  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:45:24.930729  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934601  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934686  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.976521  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:45:24.984298  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.991944  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:45:24.999664  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020750  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020853  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.066886  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:45:25.076628  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.086029  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:45:25.095338  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101041  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101118  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.145647  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:45:25.156431  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:45:25.165145  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:45:25.214664  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:45:25.265928  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:45:25.352085  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:45:25.431634  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:45:25.492845  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 09:45:25.554400  353683 kubeadm.go:401] StartCluster: {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:25.554601  353683 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:45:25.554705  353683 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:45:25.597558  353683 cri.go:96] found id: "7b1da10d6de7d31911e815a0a6e65bec0b462f36adac4663bcba270a51072ce3"
	I1227 09:45:25.597630  353683 cri.go:96] found id: "f69e010776644f8005f4cd92f4774d5dc92d62b50dadf798020d9d8db93f52a7"
	I1227 09:45:25.597649  353683 cri.go:96] found id: "f7e841ab1c87c3a73fb0fa9774a7d5540fae4454f87f94803231876049f07db7"
	I1227 09:45:25.597672  353683 cri.go:96] found id: "c8b5eff27c4f32b2e2d3926915d5eef69dcc564f101afeb65284237bedc9de47"
	I1227 09:45:25.597710  353683 cri.go:96] found id: "cc9aea908d640c5405a83f2749f502470c2bdf01223971af7da3ebb2588fd6ab"
	I1227 09:45:25.597733  353683 cri.go:96] found id: ""
	I1227 09:45:25.597819  353683 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:45:25.609417  353683 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:45:25Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:45:25.609569  353683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:45:25.618182  353683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:45:25.618252  353683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:45:25.618336  353683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:45:25.632559  353683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:45:25.633086  353683 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-513251" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.633265  353683 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "ha-513251" cluster setting kubeconfig missing "ha-513251" context setting]
	I1227 09:45:25.633617  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.634243  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:45:25.635070  353683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 09:45:25.635170  353683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 09:45:25.635191  353683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 09:45:25.635109  353683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 09:45:25.635305  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 09:45:25.635338  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 09:45:25.635362  353683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 09:45:25.635701  353683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:45:25.649140  353683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 09:45:25.649211  353683 kubeadm.go:602] duration metric: took 30.937903ms to restartPrimaryControlPlane
	I1227 09:45:25.649235  353683 kubeadm.go:403] duration metric: took 94.844629ms to StartCluster
	I1227 09:45:25.649264  353683 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.649374  353683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.650129  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.650407  353683 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:45:25.650466  353683 start.go:242] waiting for startup goroutines ...
	I1227 09:45:25.650497  353683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:45:25.651321  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.656567  353683 out.go:179] * Enabled addons: 
	I1227 09:45:25.659428  353683 addons.go:530] duration metric: took 8.91449ms for enable addons: enabled=[]
	I1227 09:45:25.659506  353683 start.go:247] waiting for cluster config update ...
	I1227 09:45:25.659529  353683 start.go:256] writing updated cluster config ...
	I1227 09:45:25.662807  353683 out.go:203] 
	I1227 09:45:25.666068  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.666232  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.669730  353683 out.go:179] * Starting "ha-513251-m02" control-plane node in "ha-513251" cluster
	I1227 09:45:25.672614  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:25.675545  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:25.678485  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:25.678506  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:25.678618  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:25.678630  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:25.678752  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.678961  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:25.700973  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:25.701000  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:25.701015  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:25.701040  353683 start.go:360] acquireMachinesLock for ha-513251-m02: {Name:mk859480e290b8b366277aa9ac48e168657809ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:25.701095  353683 start.go:364] duration metric: took 35.808µs to acquireMachinesLock for "ha-513251-m02"
	I1227 09:45:25.701120  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:25.701128  353683 fix.go:54] fixHost starting: m02
	I1227 09:45:25.701383  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:25.721891  353683 fix.go:112] recreateIfNeeded on ha-513251-m02: state=Stopped err=<nil>
	W1227 09:45:25.721916  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:25.725291  353683 out.go:252] * Restarting existing docker container for "ha-513251-m02" ...
	I1227 09:45:25.725375  353683 cli_runner.go:164] Run: docker start ha-513251-m02
	I1227 09:45:26.149022  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:26.186961  353683 kic.go:430] container "ha-513251-m02" state is running.
	I1227 09:45:26.187328  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:26.217667  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:26.217913  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:26.217973  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:26.245157  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:26.245467  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:26.245482  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:26.246067  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55528->127.0.0.1:33203: read: connection reset by peer
	I1227 09:45:29.476637  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.476662  353683 ubuntu.go:182] provisioning hostname "ha-513251-m02"
	I1227 09:45:29.476730  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.515584  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.515885  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.515896  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251-m02 && echo "ha-513251-m02" | sudo tee /etc/hostname
	I1227 09:45:29.753613  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.753763  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.802708  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.803015  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.803031  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:30.026916  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:30.027002  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:30.027040  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:30.027088  353683 provision.go:84] configureAuth start
	I1227 09:45:30.027213  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:30.061355  353683 provision.go:143] copyHostCerts
	I1227 09:45:30.061395  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061429  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:30.061436  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061516  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:30.061646  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061664  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:30.061668  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061698  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:30.061741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061761  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:30.061766  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061789  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:30.061835  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251-m02 san=[127.0.0.1 192.168.49.3 ha-513251-m02 localhost minikube]
	I1227 09:45:30.366138  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:30.366258  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:30.366380  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.384700  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:30.494344  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:30.494406  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:30.530895  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:30.530955  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 09:45:30.561682  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:30.561747  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:45:30.591679  353683 provision.go:87] duration metric: took 564.557502ms to configureAuth
	I1227 09:45:30.591755  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:30.592084  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:30.592246  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.621605  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:30.621922  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:30.621937  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:31.635140  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:31.635164  353683 machine.go:97] duration metric: took 5.417238886s to provisionDockerMachine
	I1227 09:45:31.635176  353683 start.go:293] postStartSetup for "ha-513251-m02" (driver="docker")
	I1227 09:45:31.635186  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:31.635250  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:31.635298  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.672186  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.803466  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:31.807580  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:31.807606  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:31.807617  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:31.807677  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:31.807750  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:31.807757  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:31.807862  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:31.825236  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:31.845471  353683 start.go:296] duration metric: took 210.280443ms for postStartSetup
	I1227 09:45:31.845631  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:31.845704  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.863181  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.978613  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:31.988190  353683 fix.go:56] duration metric: took 6.287056138s for fixHost
	I1227 09:45:31.988218  353683 start.go:83] releasing machines lock for "ha-513251-m02", held for 6.287109349s
	I1227 09:45:31.988301  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:32.022351  353683 out.go:179] * Found network options:
	I1227 09:45:32.025233  353683 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 09:45:32.028060  353683 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 09:45:32.028113  353683 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 09:45:32.028186  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:32.028235  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.028260  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:32.028315  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.062562  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.071385  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.418806  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:32.560316  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:32.560399  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:32.576611  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:32.576635  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:32.576667  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:32.576717  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:32.603470  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:32.627343  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:32.627407  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:32.650889  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:32.671280  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:32.901177  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:33.083402  353683 docker.go:234] disabling docker service ...
	I1227 09:45:33.083516  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:33.102162  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:33.117631  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:33.330335  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:33.571932  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:33.588507  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:33.603417  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:33.603487  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.613092  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:33.613161  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.622600  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.632017  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.641471  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:33.650218  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.659580  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.675788  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.690916  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:33.699830  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:45:33.710022  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:33.856695  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:47:04.177050  353683 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320263495s)
	I1227 09:47:04.177079  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:47:04.177137  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:47:04.181790  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:47:04.181861  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:47:04.185784  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:47:04.214501  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:47:04.214588  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.244971  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.277197  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:47:04.280209  353683 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 09:47:04.283165  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:47:04.300447  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:47:04.304396  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:47:04.314914  353683 mustload.go:66] Loading cluster: ha-513251
	I1227 09:47:04.315173  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:04.315461  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:47:04.333467  353683 host.go:66] Checking if "ha-513251" exists ...
	I1227 09:47:04.333753  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.3
	I1227 09:47:04.333767  353683 certs.go:195] generating shared ca certs ...
	I1227 09:47:04.333782  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:47:04.333906  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:47:04.333952  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:47:04.333962  353683 certs.go:257] generating profile certs ...
	I1227 09:47:04.334040  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:47:04.334105  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.2d598068
	I1227 09:47:04.334153  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:47:04.334168  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:47:04.334198  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:47:04.334237  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:47:04.334248  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:47:04.334259  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:47:04.334275  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:47:04.334287  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:47:04.334306  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:47:04.334366  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:47:04.334408  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:47:04.334421  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:47:04.334448  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:47:04.334582  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:47:04.334618  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:47:04.334672  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:47:04.334711  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.334729  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.334741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:47:04.334806  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:47:04.352745  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:47:04.444298  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 09:47:04.448354  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 09:47:04.456699  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 09:47:04.460541  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 09:47:04.469121  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 09:47:04.472996  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 09:47:04.481446  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 09:47:04.484933  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1227 09:47:04.493259  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 09:47:04.497027  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 09:47:04.505596  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 09:47:04.509294  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 09:47:04.517713  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:47:04.537012  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:47:04.556494  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:47:04.576418  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:47:04.597182  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:47:04.618229  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:47:04.641696  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:47:04.663252  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:47:04.684934  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:47:04.716644  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:47:04.737307  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:47:04.758667  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 09:47:04.773792  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 09:47:04.788292  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 09:47:04.802374  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1227 09:47:04.817583  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 09:47:04.831128  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 09:47:04.845769  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 09:47:04.860041  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:47:04.866442  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.874396  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:47:04.882193  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886310  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886373  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.928354  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:47:04.936052  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.943752  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:47:04.952048  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956067  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956176  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.997608  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:47:05.007408  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.017602  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:47:05.026017  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030271  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030427  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.074213  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:47:05.082090  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:47:05.086100  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:47:05.128461  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:47:05.172974  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:47:05.215663  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:47:05.263541  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:47:05.307445  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
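	Each "openssl x509 -noout ... -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. A minimal Go sketch of the equivalent check with crypto/x509 follows (reading the file locally is an assumption for the sketch; in the test the certs live on the remote node and are inspected over SSH):

	// cert_checkend.go: rough equivalent of `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path stops being
	// valid within the next duration d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		// openssl's -checkend exits non-zero exactly when this prints true.
		fmt.Println("expires within 24h:", soon)
	}
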
	I1227 09:47:05.354461  353683 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 09:47:05.354578  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:47:05.354622  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:47:05.354681  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:47:05.367621  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:47:05.367701  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 09:47:05.367789  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:47:05.376110  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:47:05.376225  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 09:47:05.385227  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 09:47:05.399058  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:47:05.412225  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:47:05.433740  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:47:05.438137  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:47:05.449160  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.584548  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.598901  353683 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:47:05.599307  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:05.602962  353683 out.go:179] * Verifying Kubernetes components...
	I1227 09:47:05.605544  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.743183  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.759331  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 09:47:05.759399  353683 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 09:47:05.759628  353683 node_ready.go:35] waiting up to 6m0s for node "ha-513251-m02" to be "Ready" ...
	I1227 09:47:36.941690  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:47:36.942141  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:47:39.260337  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:41.261089  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:43.760342  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:45.760773  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:48:49.689213  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:48:49.689567  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:59702->192.168.49.2:8443: read: connection reset by peer
	W1227 09:48:51.761173  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:54.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:56.260764  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:58.260950  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:00.261274  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:02.761180  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:05.261164  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:07.760850  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:09.761126  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:12.261097  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:50:17.401158  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:50:17.401610  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:50:19.760255  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:21.761012  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:23.761193  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:26.260515  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:28.760208  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:30.760293  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:33.261011  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:35.760559  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:38.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:40.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:42.761015  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:45.260386  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:47.760256  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:50.260156  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:52.261185  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:54.760529  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:56.760914  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:58.761079  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:01.260894  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:03.261105  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:05.760186  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:07.761034  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:09.761091  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:21.261754  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:31.267176  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:33.760222  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:35.760997  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:37.761041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:40.260968  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:42.261084  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:44.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:47.260390  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:49.760248  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:51.760405  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:54.260216  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:56.260474  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:58.261041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:00.760814  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:03.260223  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:05.261042  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:07.760972  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:09.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:12.261019  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:14.760290  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:17.260953  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:19.261221  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:21.760250  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:23.760454  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:25.760687  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:28.260326  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:30.260374  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:32.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:34.761068  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:37.261218  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:39.760434  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:41.760931  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:44.260297  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:46.260721  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:48.261137  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:50.760243  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:53.260234  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:55.261149  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:53:05.759756  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": context deadline exceeded
	I1227 09:53:05.759808  353683 node_ready.go:38] duration metric: took 6m0.000151574s for node "ha-513251-m02" to be "Ready" ...
	I1227 09:53:05.763182  353683 out.go:203] 
	W1227 09:53:05.766205  353683 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1227 09:53:05.766232  353683 out.go:285] * 
	W1227 09:53:05.766486  353683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:53:05.771303  353683 out.go:203] 
	
	
	==> CRI-O <==
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.704047446Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=df594719-7494-4e4b-8b96-ff6b50da7943 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.705214137Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=1db5f92f-e3de-4051-b1e8-f4a521df221b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.705367008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.714658246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.715313645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.733436178Z" level=info msg="Created container 4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=1db5f92f-e3de-4051-b1e8-f4a521df221b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.734077735Z" level=info msg="Starting container: 4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3" id=eeb97769-30ec-478a-bc87-4f69060f31cf name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.736017513Z" level=info msg="Started container" PID=1255 containerID=4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3 description=kube-system/kube-controller-manager-ha-513251/kube-controller-manager id=eeb97769-30ec-478a-bc87-4f69060f31cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=77c125920c3f982d94f3bc7831d664d32af1a76fa71da885b790a865c893eed1
	Dec 27 09:52:27 ha-513251 conmon[1253]: conmon 4694ec899710cc574db8 <ninfo>: container 1255 exited with status 1
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.770243941Z" level=info msg="Removing container: 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.777709971Z" level=info msg="Error loading conmon cgroup of container 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6: cgroup deleted" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.780784669Z" level=info msg="Removed container 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.701490281Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=079299f4-9d89-491a-8d17-2a3678443aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.702675032Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=8c967218-255a-4dbf-a2a1-3e466c02b6e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.703767818Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=0fd3892f-ad02-44ff-b1fb-2d96da8680c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.70386432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.712337252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.712883974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.73007927Z" level=info msg="Created container 7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=0fd3892f-ad02-44ff-b1fb-2d96da8680c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.730833829Z" level=info msg="Starting container: 7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908" id=0cf41e99-8376-4017-8d87-0efd593514d8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.740489445Z" level=info msg="Started container" PID=1272 containerID=7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908 description=kube-system/kube-apiserver-ha-513251/kube-apiserver id=0cf41e99-8376-4017-8d87-0efd593514d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a678fda46be5a152fa8932be97637587d68f62be01ebcbef8a2cc06dc92777be
	Dec 27 09:53:17 ha-513251 conmon[1269]: conmon 7e32d77299b93ef151c5 <ninfo>: container 1272 exited with status 255
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.881086782Z" level=info msg="Removing container: 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.888327431Z" level=info msg="Error loading conmon cgroup of container 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53: cgroup deleted" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.891415742Z" level=info msg="Removed container 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	7e32d77299b93       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   29 seconds ago       Exited              kube-apiserver            7                   a678fda46be5a       kube-apiserver-ha-513251            kube-system
	4694ec899710c       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   About a minute ago   Exited              kube-controller-manager   9                   77c125920c3f9       kube-controller-manager-ha-513251   kube-system
	3e2f79bfcc297       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   2 minutes ago        Running             etcd                      3                   0b4fdbfc50d52       etcd-ha-513251                      kube-system
	f69e010776644       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   8 minutes ago        Running             kube-scheduler            2                   8f6686604e637       kube-scheduler-ha-513251            kube-system
	f7e841ab1c87c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   8 minutes ago        Exited              etcd                      2                   0b4fdbfc50d52       etcd-ha-513251                      kube-system
	cc9aea908d640       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   8 minutes ago        Running             kube-vip                  2                   9c394d0758080       kube-vip-ha-513251                  kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015479] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.516409] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034238] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.771451] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.481009] kauditd_printk_skb: 39 callbacks suppressed
	[Dec27 08:29] hrtimer: interrupt took 43410871 ns
	[Dec27 09:29] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 09:30] overlayfs: idmapped layers are currently not supported
	[  +0.068519] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[ +46.937326] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:42] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[  +3.379616] overlayfs: idmapped layers are currently not supported
	[ +26.881821] overlayfs: idmapped layers are currently not supported
	[Dec27 09:44] overlayfs: idmapped layers are currently not supported
	[Dec27 09:45] overlayfs: idmapped layers are currently not supported
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3e2f79bfcc29755ed4c6ee91cec29fd05896c608e4d72883a5b019d5f8609903] <==
	{"level":"info","ts":"2025-12-27T09:53:23.261658Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:23.261681Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:23.261727Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:23.261746Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-27T09:53:23.341131Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:23.842273Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:24.343090Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:24.843240Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-27T09:53:24.861510Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:24.861557Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:24.861577Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:24.861606Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:24.861616Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-27T09:53:25.105493Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8e7fd81d8c1de671","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T09:53:25.105566Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8e7fd81d8c1de671","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T09:53:25.343930Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:25.834493Z","caller":"etcdserver/v3_server.go:923","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-12-27T09:53:25.834596Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000333125s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-12-27T09:53:25.834623Z","caller":"traceutil/trace.go:172","msg":"trace[2092953664] range","detail":"{range_begin:; range_end:; }","duration":"7.000376874s","start":"2025-12-27T09:53:18.834234Z","end":"2025-12-27T09:53:25.834611Z","steps":["trace[2092953664] 'agreement among raft nodes before linearized reading'  (duration: 7.000330728s)"],"step_count":1}
	{"level":"error","ts":"2025-12-27T09:53:25.834682Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]non_learner ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2294\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2822\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3301\nnet/http.(*conn).serve\n\tnet/http/server.go:2102"}
	{"level":"info","ts":"2025-12-27T09:53:26.461387Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461441Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461493Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461504Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	
	
	==> etcd [f7e841ab1c87c3a73fb0fa9774a7d5540fae4454f87f94803231876049f07db7] <==
	{"level":"info","ts":"2025-12-27T09:50:39.850450Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T09:50:39.850492Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-513251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-27T09:50:39.850587Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:50:39.852081Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:50:39.853588Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853628Z","caller":"etcdserver/server.go:1288","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-12-27T09:50:39.853662Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-27T09:50:39.853729Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-27T09:50:39.853751Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"info","ts":"2025-12-27T09:50:39.853765Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853767Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853782Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853783Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:50:39.853792Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853813Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853823Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853829Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853834Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853843Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"error","ts":"2025-12-27T09:50:39.853842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853851Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"info","ts":"2025-12-27T09:50:39.860164Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-27T09:50:39.860448Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.860488Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-27T09:50:39.860499Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-513251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:53:26 up  1:35,  0 user,  load average: 0.30, 0.80, 1.66
	Linux ha-513251 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908] <==
	I1227 09:52:56.791839       1 options.go:263] external host was not specified, using 192.168.49.2
	I1227 09:52:56.794835       1 server.go:150] Version: v1.35.0
	I1227 09:52:56.794953       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1227 09:52:57.278394       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:52:57.279880       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1227 09:52:57.280532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1227 09:52:57.284066       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:52:57.287400       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1227 09:52:57.287488       1 plugins.go:160] Loaded 14 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,NodeDeclaredFeatureValidator,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1227 09:52:57.287738       1 instance.go:240] Using reconciler: lease
	W1227 09:52:57.289397       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:53:17.278022       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:53:17.280084       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1227 09:53:17.288757       1 instance.go:233] Error creating leases: error creating storage factory: context deadline exceeded
	W1227 09:53:17.288844       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	
	
	==> kube-controller-manager [4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3] <==
	I1227 09:52:17.366708       1 serving.go:386] Generated self-signed cert in-memory
	I1227 09:52:17.376666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 09:52:17.376702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:52:17.378190       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 09:52:17.378332       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 09:52:17.378381       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 09:52:17.378538       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 09:52:27.380746       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [f69e010776644f8005f4cd92f4774d5dc92d62b50dadf798020d9d8db93f52a7] <==
	E1227 09:49:23.577558       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:49:25.962683       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:49:27.718552       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:49:28.729793       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:49:30.037535       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:49:34.733607       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:49:34.929599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:49:35.092788       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:49:35.988125       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:49:38.452688       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:49:38.595135       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:49:44.386717       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:49:45.790610       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:49:49.151819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:50:03.444648       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 09:50:03.690487       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:50:03.834739       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:50:04.045150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:50:07.144662       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:50:07.401271       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:50:07.608201       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:50:10.033692       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:50:13.073103       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:50:14.539398       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:50:15.853200       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	
	
	==> kubelet <==
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.069823     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.170366     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.270911     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.372344     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.473214     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.574787     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.675821     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.776812     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.878178     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:25 ha-513251 kubelet[805]: E1227 09:53:25.979423     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.006248     805 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-513251\" not found" node="ha-513251"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.006693     805 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-ha-513251" containerName="kube-apiserver"
	Dec 27 09:53:26 ha-513251 kubelet[805]: I1227 09:53:26.006907     805 scope.go:122] "RemoveContainer" containerID="7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.007251     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-513251_kube-system(0da9dd3b4fd74f83fca53d342cc4832b)\"" pod="kube-system/kube-apiserver-ha-513251" podUID="0da9dd3b4fd74f83fca53d342cc4832b"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.080581     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.182086     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.283078     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: I1227 09:53:26.299132     805 kubelet_node_status.go:74] "Attempting to register node" node="ha-513251"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.299522     805 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-513251"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.384422     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.485387     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.586078     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.687021     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.788090     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.888857     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
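
Reading the dump above as a whole: node_ready.go keeps re-requesting /api/v1/nodes/ha-513251-m02 until the 6m0s wait expires, and every attempt is refused because the apiserver on 192.168.49.2:8443 never comes up; etcd on ha-513251 cannot win a pre-vote election (its only peer, 192.168.49.3:2380, is unreachable), so linearizable reads time out and kube-apiserver crash-loops on "Error creating leases: error creating storage factory". The following is a minimal client-go sketch of that kind of node-Ready poll, for illustration only; it is not minikube's actual node_ready.go, and the kubeconfig path, node name, and 2s retry interval are assumptions taken loosely from the log cadence.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition until it is True or ctx expires.
	// Errors from Get (e.g. "connection refused") are swallowed and retried, which is
	// why the log above shows a warning per attempt rather than an immediate failure.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
			case <-time.After(2 * time.Second): // assumed retry interval
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "ha-513251-m02"))
	}

With the apiserver down, such a loop can only end in context deadline exceeded, which is exactly the GUEST_START error reported above.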
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-513251 -n ha-513251
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-513251 -n ha-513251: exit status 2 (324.215185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "ha-513251" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.20s)
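
The post-mortem helper confirms the state with "minikube status --format={{.APIServer}}", gets "Stopped" and a non-zero exit, and therefore skips the kubectl-based diagnostics. One quick way to cross-check what the logs already show (connection refused on 192.168.49.2:8443) is to probe the apiserver health endpoint directly. This is a hedged, generic sketch, not part of the test suite; the /healthz path, the address, and the decision to skip TLS verification against the cluster's self-signed certificate are assumptions for local diagnosis only.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Probe the apiserver health endpoint; TLS verification is skipped because this
		// is a local diagnostic against a cluster-internal, self-signed certificate.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // matches the "connection refused" seen above
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body))
	}

In the state captured by this report, the Get call fails at the TCP layer, mirroring the kubelet and node_ready errors rather than returning any HTTP status.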

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (2.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-513251 node add --control-plane --alsologtostderr -v 5: exit status 103 (398.956294ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-513251-m02 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-513251"

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:53:27.342988  357634 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:53:27.343184  357634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:53:27.343212  357634 out.go:374] Setting ErrFile to fd 2...
	I1227 09:53:27.343231  357634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:53:27.343647  357634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:53:27.344093  357634 mustload.go:66] Loading cluster: ha-513251
	I1227 09:53:27.344869  357634 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:53:27.345890  357634 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:53:27.363656  357634 host.go:66] Checking if "ha-513251" exists ...
	I1227 09:53:27.364225  357634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:53:27.420505  357634 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 09:53:27.411400626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:53:27.420888  357634 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:53:27.438965  357634 host.go:66] Checking if "ha-513251-m02" exists ...
	I1227 09:53:27.439262  357634 api_server.go:166] Checking apiserver status ...
	I1227 09:53:27.439320  357634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:53:27.439365  357634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:53:27.456758  357634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	W1227 09:53:27.558826  357634 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1227 09:53:27.558901  357634 out.go:285] ! The control-plane node ha-513251 apiserver is not running (will try others): (state=Stopped)
	! The control-plane node ha-513251 apiserver is not running (will try others): (state=Stopped)
	I1227 09:53:27.558910  357634 api_server.go:166] Checking apiserver status ...
	I1227 09:53:27.558979  357634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:53:27.559019  357634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:53:27.576360  357634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	W1227 09:53:27.681808  357634 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:53:27.685176  357634 out.go:179] * The control-plane node ha-513251-m02 apiserver is not running: (state=Stopped)
	I1227 09:53:27.688089  357634 out.go:179]   To start a cluster, run: "minikube start -p ha-513251"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-arm64 -p ha-513251 node add --control-plane --alsologtostderr -v 5" : exit status 103
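The exit status 103 above is returned immediately after the "apiserver is not running" messages: before adding a secondary control-plane node, minikube probes each existing control-plane node by resolving the host port that Docker mapped to the node container's 22/tcp and running pgrep for kube-apiserver over SSH (the cli_runner/ssh_runner lines in the stderr block above). A minimal sketch of that same probe, using the profile name and key path from this run; both commands are copied from the log, and the plain ssh invocation is an assumed stand-in for minikube's internal SSH client:

	PROFILE=ha-513251
	KEY=/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa  # key path from the sshutil line above
	# host port mapped to the node's SSH port (22/tcp), as queried by cli_runner
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$PROFILE")
	# the apiserver probe from ssh_runner; a non-zero exit here is what gets reported as state=Stopped
	ssh -i "$KEY" -p "$PORT" -o StrictHostKeyChecking=no docker@127.0.0.1 \
	  'sudo pgrep -xnf "kube-apiserver.*minikube.*"'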
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-513251
helpers_test.go:244: (dbg) docker inspect ha-513251:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13",
	        "Created": "2025-12-27T09:37:38.963263504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353813,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:45:18.061061871Z",
	            "FinishedAt": "2025-12-27T09:45:17.324877839Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/hosts",
	        "LogPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13-json.log",
	        "Name": "/ha-513251",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-513251:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-513251",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13",
	                "LowerDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-513251",
	                "Source": "/var/lib/docker/volumes/ha-513251/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-513251",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-513251",
	                "name.minikube.sigs.k8s.io": "ha-513251",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a36bf48a852f2142e03dad97328b97c989e14e43fba2676424d26ea683f38f8a",
	            "SandboxKey": "/var/run/docker/netns/a36bf48a852f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-513251": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f9:a2:53:37:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b4d8553c414af9c151cf56182ba5e11cb773bee9162fafd694324331063b48e",
	                    "EndpointID": "076755f827ee23e4371e7e48c17c1b2920cab289dad51349a1a50ffb80554b20",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-513251",
	                        "bb5d0cc0ca44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
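The fields the harness keeps querying can be read back out of the inspect document above with the same Go-template format strings that appear in the cli_runner lines; for example (the 8443/tcp variant is simply the 22/tcp pattern applied to the API server port):

	docker container inspect ha-513251 --format '{{.State.Status}}'                                                # running
	docker container inspect ha-513251 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' # 33201 in this run
	docker container inspect ha-513251 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'         # 192.168.49.2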
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-513251 -n ha-513251
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-513251 -n ha-513251: exit status 2 (310.042763ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
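The --format={{.Host}} query above only surfaces the host container state, which is why it prints Running even though both apiservers are stopped; the unhealthy components are signalled through the non-zero exit code instead, which is why helpers_test treats exit status 2 as "may be ok". A per-component status for this profile (sketch; output depends on cluster state) would be:

	out/minikube-linux-arm64 -p ha-513251 status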
helpers_test.go:253: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 logs -n 25
helpers_test.go:261: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-513251 ssh -n ha-513251-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test_ha-513251-m03_ha-513251-m04.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp testdata/cp-test.txt ha-513251-m04:/home/docker/cp-test.txt                                                             │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4265014863/001/cp-test_ha-513251-m04.txt │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251:/home/docker/cp-test_ha-513251-m04_ha-513251.txt                       │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251.txt                                                 │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m02:/home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m02 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m03:/home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m03 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ node    │ ha-513251 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ node    │ ha-513251 node start m02 --alsologtostderr -v 5                                                                                      │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:42 UTC │
	│ node    │ ha-513251 node list --alsologtostderr -v 5                                                                                           │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │                     │
	│ stop    │ ha-513251 stop --alsologtostderr -v 5                                                                                                │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:43 UTC │
	│ start   │ ha-513251 start --wait true --alsologtostderr -v 5                                                                                   │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:43 UTC │ 27 Dec 25 09:44 UTC │
	│ node    │ ha-513251 node list --alsologtostderr -v 5                                                                                           │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │                     │
	│ node    │ ha-513251 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │ 27 Dec 25 09:44 UTC │
	│ stop    │ ha-513251 stop --alsologtostderr -v 5                                                                                                │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │ 27 Dec 25 09:45 UTC │
	│ start   │ ha-513251 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:45 UTC │                     │
	│ node    │ ha-513251 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:45:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:45:17.780858  353683 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:45:17.781066  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781099  353683 out.go:374] Setting ErrFile to fd 2...
	I1227 09:45:17.781121  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781427  353683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:45:17.781839  353683 out.go:368] Setting JSON to false
	I1227 09:45:17.782724  353683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5271,"bootTime":1766823447,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:45:17.782828  353683 start.go:143] virtualization:  
	I1227 09:45:17.786847  353683 out.go:179] * [ha-513251] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:45:17.789790  353683 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:45:17.789897  353683 notify.go:221] Checking for updates...
	I1227 09:45:17.795846  353683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:45:17.798784  353683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:17.801736  353683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:45:17.804638  353683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:45:17.807626  353683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:45:17.811252  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:17.811891  353683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:45:17.840112  353683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:45:17.840288  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.900770  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.89071505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.900884  353683 docker.go:319] overlay module found
	I1227 09:45:17.905637  353683 out.go:179] * Using the docker driver based on existing profile
	I1227 09:45:17.908470  353683 start.go:309] selected driver: docker
	I1227 09:45:17.908492  353683 start.go:928] validating driver "docker" against &{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.908638  353683 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:45:17.908737  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.967550  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.958343241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.968010  353683 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:45:17.968048  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:17.968104  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:17.968157  353683 start.go:353] cluster config:
	{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.971557  353683 out.go:179] * Starting "ha-513251" primary control-plane node in "ha-513251" cluster
	I1227 09:45:17.974341  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:17.977308  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:17.980127  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:17.980181  353683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:45:17.980196  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:17.980207  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:17.980281  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:17.980293  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:17.980447  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.000295  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:18.000319  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:18.000341  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:18.000375  353683 start.go:360] acquireMachinesLock for ha-513251: {Name:mka277024f8c2226ae51cd2727a8e25e47e84998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:18.000447  353683 start.go:364] duration metric: took 46.926µs to acquireMachinesLock for "ha-513251"
	I1227 09:45:18.000468  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:18.000475  353683 fix.go:54] fixHost starting: 
	I1227 09:45:18.000773  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.022293  353683 fix.go:112] recreateIfNeeded on ha-513251: state=Stopped err=<nil>
	W1227 09:45:18.022327  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:18.025796  353683 out.go:252] * Restarting existing docker container for "ha-513251" ...
	I1227 09:45:18.025962  353683 cli_runner.go:164] Run: docker start ha-513251
	I1227 09:45:18.291407  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.313034  353683 kic.go:430] container "ha-513251" state is running.
	I1227 09:45:18.313680  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:18.336728  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.337162  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:18.337228  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:18.363888  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:18.364313  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:18.364324  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:18.365396  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:45:21.507722  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.507748  353683 ubuntu.go:182] provisioning hostname "ha-513251"
	I1227 09:45:21.507813  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.525335  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.525658  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.525674  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251 && echo "ha-513251" | sudo tee /etc/hostname
	I1227 09:45:21.674143  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.674300  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.692486  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.692814  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.692838  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:21.832635  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:21.832681  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:21.832704  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:21.832713  353683 provision.go:84] configureAuth start
	I1227 09:45:21.832776  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:21.851553  353683 provision.go:143] copyHostCerts
	I1227 09:45:21.851617  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851676  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:21.851690  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851770  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:21.851873  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851904  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:21.851923  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851962  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:21.852092  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852114  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:21.852123  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852155  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:21.852214  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251 san=[127.0.0.1 192.168.49.2 ha-513251 localhost minikube]
	I1227 09:45:21.903039  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:21.903143  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:21.903193  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.920995  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.020706  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:22.020772  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1227 09:45:22.040457  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:22.040545  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:45:22.059426  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:22.059522  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:22.078437  353683 provision.go:87] duration metric: took 245.707104ms to configureAuth
	I1227 09:45:22.078487  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:22.078740  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:22.078852  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.097273  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:22.097592  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:22.097611  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:22.461249  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:22.461332  353683 machine.go:97] duration metric: took 4.124155515s to provisionDockerMachine
	I1227 09:45:22.461358  353683 start.go:293] postStartSetup for "ha-513251" (driver="docker")
	I1227 09:45:22.461396  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:22.461505  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:22.461577  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.484466  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.588039  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:22.591353  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:22.591383  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:22.591396  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:22.591453  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:22.591540  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:22.591553  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:22.591653  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:22.599440  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:22.617415  353683 start.go:296] duration metric: took 156.015491ms for postStartSetup
	I1227 09:45:22.617497  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:22.617543  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.635627  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.733536  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:22.738441  353683 fix.go:56] duration metric: took 4.73795966s for fixHost
	I1227 09:45:22.738473  353683 start.go:83] releasing machines lock for "ha-513251", held for 4.738016497s
	I1227 09:45:22.738547  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:22.756007  353683 ssh_runner.go:195] Run: cat /version.json
	I1227 09:45:22.756077  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.756356  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:22.756411  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.775684  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.784683  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.974776  353683 ssh_runner.go:195] Run: systemctl --version
	I1227 09:45:22.981407  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:23.019688  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:23.024397  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:23.024482  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:23.033023  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:23.033048  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:23.033080  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:23.033128  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:23.048890  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:23.062391  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:23.062461  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:23.078874  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:23.092641  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:23.215628  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:23.336773  353683 docker.go:234] disabling docker service ...
	I1227 09:45:23.336856  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:23.351993  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:23.365076  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:23.486999  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:23.607630  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:23.621666  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:23.637617  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:23.637733  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.646729  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:23.646803  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.656407  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.665374  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.674513  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:23.682899  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.692638  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.701500  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.710461  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:23.718222  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:45:23.726035  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:23.837128  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:45:24.007170  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:45:24.007319  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:45:24.014123  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:45:24.014245  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:45:24.033366  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:45:24.058444  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:45:24.058524  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.087072  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.118588  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:45:24.121527  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:45:24.138224  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:45:24.142467  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:45:24.152932  353683 kubeadm.go:884] updating cluster {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:45:24.153087  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:24.153163  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.188918  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.188945  353683 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:45:24.189006  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.216272  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.216301  353683 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:45:24.216314  353683 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 09:45:24.216440  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
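Note: the [Service] drop-in printed above is what minikube later copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 359-byte scp a few lines down). A minimal sketch for confirming the override actually landed on the node, assuming a shell on ha-513251; these are standard ls/systemd commands, not part of the test:

	ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl cat kubelet --no-pager | grep ExecStart=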
	I1227 09:45:24.216534  353683 ssh_runner.go:195] Run: crio config
	I1227 09:45:24.292083  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:24.292105  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:24.292144  353683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:45:24.292181  353683 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-513251 NodeName:ha-513251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:45:24.292330  353683 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-513251"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:45:24.292352  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:45:24.292412  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:45:24.304778  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:45:24.304912  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
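Note: the static pod manifest above is generated without control-plane load-balancing because the check at 09:45:24.304778 found no ip_vs module (lsmod exited 1); the VIP 192.168.49.254 is still announced via ARP (vip_arp: "true" above). A minimal sketch of reproducing that check on the node; the commented modprobe line is purely illustrative and not something the test runs:

	sudo sh -c "lsmod | grep ip_vs" || echo "ip_vs not loaded; kube-vip LB mode stays disabled"
	# sudo modprobe ip_vs   # would only succeed on kernels that ship the module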
	I1227 09:45:24.305012  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:45:24.312901  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:45:24.312976  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 09:45:24.320559  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 09:45:24.334537  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:45:24.347371  353683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 09:45:24.360123  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:45:24.373098  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:45:24.376820  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:45:24.387127  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:24.503934  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:45:24.522185  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.2
	I1227 09:45:24.522204  353683 certs.go:195] generating shared ca certs ...
	I1227 09:45:24.522219  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.522359  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:45:24.522410  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:45:24.522417  353683 certs.go:257] generating profile certs ...
	I1227 09:45:24.522498  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:45:24.522526  353683 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14
	I1227 09:45:24.522540  353683 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1227 09:45:24.644648  353683 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 ...
	I1227 09:45:24.648971  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14: {Name:mkb5dff6e9ccf7c0fd52113e0d144d6316de11fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649217  353683 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 ...
	I1227 09:45:24.649259  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14: {Name:mk0fad6909993d85239fadc763725d8b8b7a440c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649401  353683 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt
	I1227 09:45:24.649572  353683 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key
	I1227 09:45:24.649765  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:45:24.649810  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:45:24.649846  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:45:24.649875  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:45:24.649918  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:45:24.649950  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:45:24.649988  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:45:24.650030  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:45:24.650060  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:45:24.650137  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:45:24.650200  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:45:24.650235  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:45:24.650297  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:45:24.650344  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:45:24.650434  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:45:24.650545  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:24.650616  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.650660  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:24.650689  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.651244  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:45:24.675286  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:45:24.694694  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:45:24.717231  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:45:24.749389  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:45:24.770851  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:45:24.790309  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:45:24.811612  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:45:24.834366  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:45:24.853802  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:45:24.871797  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:45:24.894130  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:45:24.908139  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:45:24.914716  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.922797  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:45:24.930729  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934601  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934686  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.976521  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:45:24.984298  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.991944  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:45:24.999664  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020750  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020853  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.066886  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:45:25.076628  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.086029  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:45:25.095338  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101041  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101118  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.145647  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
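Note: the three openssl/ln/test sequences above follow the standard OpenSSL trust-store convention: each PEM under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 under /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 here). A minimal sketch of that convention using the minikubeCA.pem path from the log; this is not minikube code:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")     # prints the subject hash, e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
	sudo test -L "/etc/ssl/certs/${hash}.0" && echo "trust link present"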
	I1227 09:45:25.156431  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:45:25.165145  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:45:25.214664  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:45:25.265928  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:45:25.352085  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:45:25.431634  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:45:25.492845  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
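Note: -checkend 86400 makes openssl exit non-zero when the certificate expires within 86400 seconds (24 h), so each of the six runs above is a pass/fail freshness probe on a control-plane cert. A one-line sketch against one of the paths from the log:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expires within 24h"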
	I1227 09:45:25.554400  353683 kubeadm.go:401] StartCluster: {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:25.554601  353683 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:45:25.554705  353683 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:45:25.597558  353683 cri.go:96] found id: "7b1da10d6de7d31911e815a0a6e65bec0b462f36adac4663bcba270a51072ce3"
	I1227 09:45:25.597630  353683 cri.go:96] found id: "f69e010776644f8005f4cd92f4774d5dc92d62b50dadf798020d9d8db93f52a7"
	I1227 09:45:25.597649  353683 cri.go:96] found id: "f7e841ab1c87c3a73fb0fa9774a7d5540fae4454f87f94803231876049f07db7"
	I1227 09:45:25.597672  353683 cri.go:96] found id: "c8b5eff27c4f32b2e2d3926915d5eef69dcc564f101afeb65284237bedc9de47"
	I1227 09:45:25.597710  353683 cri.go:96] found id: "cc9aea908d640c5405a83f2749f502470c2bdf01223971af7da3ebb2588fd6ab"
	I1227 09:45:25.597733  353683 cri.go:96] found id: ""
	I1227 09:45:25.597819  353683 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:45:25.609417  353683 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:45:25Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:45:25.609569  353683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:45:25.618182  353683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:45:25.618252  353683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:45:25.618336  353683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:45:25.632559  353683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:45:25.633086  353683 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-513251" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.633265  353683 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "ha-513251" cluster setting kubeconfig missing "ha-513251" context setting]
	I1227 09:45:25.633617  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.634243  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:45:25.635070  353683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 09:45:25.635170  353683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 09:45:25.635191  353683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 09:45:25.635109  353683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 09:45:25.635305  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 09:45:25.635338  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 09:45:25.635362  353683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 09:45:25.635701  353683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:45:25.649140  353683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 09:45:25.649211  353683 kubeadm.go:602] duration metric: took 30.937903ms to restartPrimaryControlPlane
	I1227 09:45:25.649235  353683 kubeadm.go:403] duration metric: took 94.844629ms to StartCluster
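Note: the "does not require reconfiguration" decision at 09:45:25.649140 follows the diff run just above it: the freshly rendered /var/tmp/minikube/kubeadm.yaml.new (scp'd at 09:45:24.347371) matched the kubeadm.yaml already on the node, so restartPrimaryControlPlane returns without re-running kubeadm. A minimal sketch of the same check, assuming a shell on the node:

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	    echo "running cluster does not require reconfiguration"
	else
	    echo "kubeadm config drifted; control plane would be reconfigured"
	fi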
	I1227 09:45:25.649264  353683 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.649374  353683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.650129  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.650407  353683 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:45:25.650466  353683 start.go:242] waiting for startup goroutines ...
	I1227 09:45:25.650497  353683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:45:25.651321  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.656567  353683 out.go:179] * Enabled addons: 
	I1227 09:45:25.659428  353683 addons.go:530] duration metric: took 8.91449ms for enable addons: enabled=[]
	I1227 09:45:25.659506  353683 start.go:247] waiting for cluster config update ...
	I1227 09:45:25.659529  353683 start.go:256] writing updated cluster config ...
	I1227 09:45:25.662807  353683 out.go:203] 
	I1227 09:45:25.666068  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.666232  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.669730  353683 out.go:179] * Starting "ha-513251-m02" control-plane node in "ha-513251" cluster
	I1227 09:45:25.672614  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:25.675545  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:25.678485  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:25.678506  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:25.678618  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:25.678630  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:25.678752  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.678961  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:25.700973  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:25.701000  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:25.701015  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:25.701040  353683 start.go:360] acquireMachinesLock for ha-513251-m02: {Name:mk859480e290b8b366277aa9ac48e168657809ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:25.701095  353683 start.go:364] duration metric: took 35.808µs to acquireMachinesLock for "ha-513251-m02"
	I1227 09:45:25.701120  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:25.701128  353683 fix.go:54] fixHost starting: m02
	I1227 09:45:25.701383  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:25.721891  353683 fix.go:112] recreateIfNeeded on ha-513251-m02: state=Stopped err=<nil>
	W1227 09:45:25.721916  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:25.725291  353683 out.go:252] * Restarting existing docker container for "ha-513251-m02" ...
	I1227 09:45:25.725375  353683 cli_runner.go:164] Run: docker start ha-513251-m02
	I1227 09:45:26.149022  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:26.186961  353683 kic.go:430] container "ha-513251-m02" state is running.
	I1227 09:45:26.187328  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:26.217667  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:26.217913  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:26.217973  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:26.245157  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:26.245467  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:26.245482  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:26.246067  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55528->127.0.0.1:33203: read: connection reset by peer
	I1227 09:45:29.476637  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.476662  353683 ubuntu.go:182] provisioning hostname "ha-513251-m02"
	I1227 09:45:29.476730  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.515584  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.515885  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.515896  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251-m02 && echo "ha-513251-m02" | sudo tee /etc/hostname
	I1227 09:45:29.753613  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.753763  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.802708  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.803015  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.803031  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:30.026916  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:30.027002  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:30.027040  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:30.027088  353683 provision.go:84] configureAuth start
	I1227 09:45:30.027213  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:30.061355  353683 provision.go:143] copyHostCerts
	I1227 09:45:30.061395  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061429  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:30.061436  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061516  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:30.061646  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061664  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:30.061668  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061698  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:30.061741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061761  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:30.061766  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061789  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:30.061835  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251-m02 san=[127.0.0.1 192.168.49.3 ha-513251-m02 localhost minikube]
	I1227 09:45:30.366138  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:30.366258  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:30.366380  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.384700  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:30.494344  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:30.494406  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:30.530895  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:30.530955  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 09:45:30.561682  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:30.561747  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:45:30.591679  353683 provision.go:87] duration metric: took 564.557502ms to configureAuth
	I1227 09:45:30.591755  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:30.592084  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:30.592246  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.621605  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:30.621922  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:30.621937  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:31.635140  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:31.635164  353683 machine.go:97] duration metric: took 5.417238886s to provisionDockerMachine
	I1227 09:45:31.635176  353683 start.go:293] postStartSetup for "ha-513251-m02" (driver="docker")
	I1227 09:45:31.635186  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:31.635250  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:31.635298  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.672186  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.803466  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:31.807580  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:31.807606  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:31.807617  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:31.807677  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:31.807750  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:31.807757  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:31.807862  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:31.825236  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:31.845471  353683 start.go:296] duration metric: took 210.280443ms for postStartSetup
	I1227 09:45:31.845631  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:31.845704  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.863181  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.978613  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:31.988190  353683 fix.go:56] duration metric: took 6.287056138s for fixHost
	I1227 09:45:31.988218  353683 start.go:83] releasing machines lock for "ha-513251-m02", held for 6.287109349s
	I1227 09:45:31.988301  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:32.022351  353683 out.go:179] * Found network options:
	I1227 09:45:32.025233  353683 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 09:45:32.028060  353683 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 09:45:32.028113  353683 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 09:45:32.028186  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:32.028235  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.028260  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:32.028315  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.062562  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.071385  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.418806  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:32.560316  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:32.560399  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:32.576611  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:32.576635  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:32.576667  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:32.576717  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:32.603470  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:32.627343  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:32.627407  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:32.650889  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:32.671280  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:32.901177  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:33.083402  353683 docker.go:234] disabling docker service ...
	I1227 09:45:33.083516  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:33.102162  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:33.117631  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:33.330335  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:33.571932  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:33.588507  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:33.603417  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:33.603487  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.613092  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:33.613161  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.622600  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.632017  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.641471  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:33.650218  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.659580  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.675788  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.690916  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:33.699830  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
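Note: the sed/grep edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) all target /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and crio restart that follow. A minimal sketch for spot-checking the resulting values on the node; the expected strings are taken from the commands in the log, not from an inspected file:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs",
	#           conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0"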
	I1227 09:45:33.710022  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:33.856695  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:47:04.177050  353683 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320263495s)
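Note: this `systemctl restart crio` on ha-513251-m02 took 1m30.32s (the Completed line above), by far the longest single step in bringing the node back up. If that delay needs investigating, standard systemd tooling on the node is enough; these commands are not part of the test:

	sudo systemctl status crio --no-pager
	sudo journalctl -u crio --since "10 minutes ago" --no-pager | tail -n 50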
	I1227 09:47:04.177079  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:47:04.177137  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:47:04.181790  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:47:04.181861  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:47:04.185784  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:47:04.214501  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:47:04.214588  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.244971  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.277197  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:47:04.280209  353683 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 09:47:04.283165  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:47:04.300447  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:47:04.304396  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:47:04.314914  353683 mustload.go:66] Loading cluster: ha-513251
	I1227 09:47:04.315173  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:04.315461  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:47:04.333467  353683 host.go:66] Checking if "ha-513251" exists ...
	I1227 09:47:04.333753  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.3
	I1227 09:47:04.333767  353683 certs.go:195] generating shared ca certs ...
	I1227 09:47:04.333782  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:47:04.333906  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:47:04.333952  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:47:04.333962  353683 certs.go:257] generating profile certs ...
	I1227 09:47:04.334040  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:47:04.334105  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.2d598068
	I1227 09:47:04.334153  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:47:04.334168  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:47:04.334198  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:47:04.334237  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:47:04.334248  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:47:04.334259  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:47:04.334275  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:47:04.334287  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:47:04.334306  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:47:04.334366  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:47:04.334408  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:47:04.334421  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:47:04.334448  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:47:04.334582  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:47:04.334618  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:47:04.334672  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:47:04.334711  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.334729  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.334741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:47:04.334806  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:47:04.352745  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:47:04.444298  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 09:47:04.448354  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 09:47:04.456699  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 09:47:04.460541  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 09:47:04.469121  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 09:47:04.472996  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 09:47:04.481446  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 09:47:04.484933  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1227 09:47:04.493259  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 09:47:04.497027  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 09:47:04.505596  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 09:47:04.509294  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 09:47:04.517713  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:47:04.537012  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:47:04.556494  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:47:04.576418  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:47:04.597182  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:47:04.618229  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:47:04.641696  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:47:04.663252  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:47:04.684934  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:47:04.716644  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:47:04.737307  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:47:04.758667  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 09:47:04.773792  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 09:47:04.788292  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 09:47:04.802374  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1227 09:47:04.817583  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 09:47:04.831128  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 09:47:04.845769  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 09:47:04.860041  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:47:04.866442  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.874396  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:47:04.882193  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886310  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886373  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.928354  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:47:04.936052  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.943752  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:47:04.952048  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956067  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956176  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.997608  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:47:05.007408  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.017602  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:47:05.026017  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030271  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030427  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.074213  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:47:05.082090  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:47:05.086100  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:47:05.128461  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:47:05.172974  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:47:05.215663  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:47:05.263541  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:47:05.307445  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 09:47:05.354461  353683 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 09:47:05.354578  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:47:05.354622  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:47:05.354681  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:47:05.367621  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:47:05.367701  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 09:47:05.367789  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:47:05.376110  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:47:05.376225  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 09:47:05.385227  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 09:47:05.399058  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:47:05.412225  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:47:05.433740  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:47:05.438137  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:47:05.449160  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.584548  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.598901  353683 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:47:05.599307  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:05.602962  353683 out.go:179] * Verifying Kubernetes components...
	I1227 09:47:05.605544  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.743183  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.759331  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 09:47:05.759399  353683 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 09:47:05.759628  353683 node_ready.go:35] waiting up to 6m0s for node "ha-513251-m02" to be "Ready" ...
	I1227 09:47:36.941690  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:47:36.942141  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:47:39.260337  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:41.261089  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:43.760342  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:45.760773  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:48:49.689213  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:48:49.689567  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:59702->192.168.49.2:8443: read: connection reset by peer
	W1227 09:48:51.761173  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:54.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:56.260764  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:58.260950  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:00.261274  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:02.761180  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:05.261164  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:07.760850  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:09.761126  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:12.261097  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:50:17.401158  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:50:17.401610  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:50:19.760255  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:21.761012  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:23.761193  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:26.260515  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:28.760208  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:30.760293  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:33.261011  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:35.760559  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:38.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:40.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:42.761015  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:45.260386  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:47.760256  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:50.260156  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:52.261185  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:54.760529  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:56.760914  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:58.761079  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:01.260894  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:03.261105  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:05.760186  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:07.761034  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:09.761091  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:21.261754  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:31.267176  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:33.760222  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:35.760997  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:37.761041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:40.260968  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:42.261084  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:44.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:47.260390  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:49.760248  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:51.760405  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:54.260216  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:56.260474  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:58.261041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:00.760814  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:03.260223  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:05.261042  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:07.760972  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:09.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:12.261019  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:14.760290  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:17.260953  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:19.261221  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:21.760250  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:23.760454  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:25.760687  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:28.260326  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:30.260374  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:32.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:34.761068  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:37.261218  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:39.760434  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:41.760931  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:44.260297  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:46.260721  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:48.261137  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:50.760243  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:53.260234  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:55.261149  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:53:05.759756  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": context deadline exceeded
	I1227 09:53:05.759808  353683 node_ready.go:38] duration metric: took 6m0.000151574s for node "ha-513251-m02" to be "Ready" ...
	I1227 09:53:05.763182  353683 out.go:203] 
	W1227 09:53:05.766205  353683 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1227 09:53:05.766232  353683 out.go:285] * 
	W1227 09:53:05.766486  353683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:53:05.771303  353683 out.go:203] 
	
	
	==> CRI-O <==
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.704047446Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=df594719-7494-4e4b-8b96-ff6b50da7943 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.705214137Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=1db5f92f-e3de-4051-b1e8-f4a521df221b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.705367008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.714658246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.715313645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.733436178Z" level=info msg="Created container 4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=1db5f92f-e3de-4051-b1e8-f4a521df221b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.734077735Z" level=info msg="Starting container: 4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3" id=eeb97769-30ec-478a-bc87-4f69060f31cf name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.736017513Z" level=info msg="Started container" PID=1255 containerID=4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3 description=kube-system/kube-controller-manager-ha-513251/kube-controller-manager id=eeb97769-30ec-478a-bc87-4f69060f31cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=77c125920c3f982d94f3bc7831d664d32af1a76fa71da885b790a865c893eed1
	Dec 27 09:52:27 ha-513251 conmon[1253]: conmon 4694ec899710cc574db8 <ninfo>: container 1255 exited with status 1
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.770243941Z" level=info msg="Removing container: 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.777709971Z" level=info msg="Error loading conmon cgroup of container 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6: cgroup deleted" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.780784669Z" level=info msg="Removed container 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.701490281Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=079299f4-9d89-491a-8d17-2a3678443aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.702675032Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=8c967218-255a-4dbf-a2a1-3e466c02b6e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.703767818Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=0fd3892f-ad02-44ff-b1fb-2d96da8680c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.70386432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.712337252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.712883974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.73007927Z" level=info msg="Created container 7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=0fd3892f-ad02-44ff-b1fb-2d96da8680c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.730833829Z" level=info msg="Starting container: 7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908" id=0cf41e99-8376-4017-8d87-0efd593514d8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.740489445Z" level=info msg="Started container" PID=1272 containerID=7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908 description=kube-system/kube-apiserver-ha-513251/kube-apiserver id=0cf41e99-8376-4017-8d87-0efd593514d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a678fda46be5a152fa8932be97637587d68f62be01ebcbef8a2cc06dc92777be
	Dec 27 09:53:17 ha-513251 conmon[1269]: conmon 7e32d77299b93ef151c5 <ninfo>: container 1272 exited with status 255
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.881086782Z" level=info msg="Removing container: 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.888327431Z" level=info msg="Error loading conmon cgroup of container 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53: cgroup deleted" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.891415742Z" level=info msg="Removed container 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	7e32d77299b93       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   31 seconds ago       Exited              kube-apiserver            7                   a678fda46be5a       kube-apiserver-ha-513251            kube-system
	4694ec899710c       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   About a minute ago   Exited              kube-controller-manager   9                   77c125920c3f9       kube-controller-manager-ha-513251   kube-system
	3e2f79bfcc297       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   2 minutes ago        Running             etcd                      3                   0b4fdbfc50d52       etcd-ha-513251                      kube-system
	f69e010776644       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   8 minutes ago        Running             kube-scheduler            2                   8f6686604e637       kube-scheduler-ha-513251            kube-system
	f7e841ab1c87c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   8 minutes ago        Exited              etcd                      2                   0b4fdbfc50d52       etcd-ha-513251                      kube-system
	cc9aea908d640       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   8 minutes ago        Running             kube-vip                  2                   9c394d0758080       kube-vip-ha-513251                  kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015479] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.516409] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034238] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.771451] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.481009] kauditd_printk_skb: 39 callbacks suppressed
	[Dec27 08:29] hrtimer: interrupt took 43410871 ns
	[Dec27 09:29] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 09:30] overlayfs: idmapped layers are currently not supported
	[  +0.068519] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[ +46.937326] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:42] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[  +3.379616] overlayfs: idmapped layers are currently not supported
	[ +26.881821] overlayfs: idmapped layers are currently not supported
	[Dec27 09:44] overlayfs: idmapped layers are currently not supported
	[Dec27 09:45] overlayfs: idmapped layers are currently not supported
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3e2f79bfcc29755ed4c6ee91cec29fd05896c608e4d72883a5b019d5f8609903] <==
	{"level":"warn","ts":"2025-12-27T09:53:24.343090Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:24.843240Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-27T09:53:24.861510Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:24.861557Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:24.861577Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:24.861606Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:24.861616Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-27T09:53:25.105493Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8e7fd81d8c1de671","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T09:53:25.105566Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8e7fd81d8c1de671","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T09:53:25.343930Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447398,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:25.834493Z","caller":"etcdserver/v3_server.go:923","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-12-27T09:53:25.834596Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000333125s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-12-27T09:53:25.834623Z","caller":"traceutil/trace.go:172","msg":"trace[2092953664] range","detail":"{range_begin:; range_end:; }","duration":"7.000376874s","start":"2025-12-27T09:53:18.834234Z","end":"2025-12-27T09:53:25.834611Z","steps":["trace[2092953664] 'agreement among raft nodes before linearized reading'  (duration: 7.000330728s)"],"step_count":1}
	{"level":"error","ts":"2025-12-27T09:53:25.834682Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]non_learner ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2294\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2822\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3301\nnet/http.(*conn).serve\n\tnet/http/server.go:2102"}
	{"level":"info","ts":"2025-12-27T09:53:26.461387Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461441Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461493Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461504Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:53:28.061763Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:28.061887Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:28.061908Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:28.061939Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:28.061950Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-27T09:53:28.098785Z","caller":"etcdserver/server.go:1830","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-513251 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	
	
	==> etcd [f7e841ab1c87c3a73fb0fa9774a7d5540fae4454f87f94803231876049f07db7] <==
	{"level":"info","ts":"2025-12-27T09:50:39.850450Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T09:50:39.850492Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-513251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-27T09:50:39.850587Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:50:39.852081Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:50:39.853588Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853628Z","caller":"etcdserver/server.go:1288","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-12-27T09:50:39.853662Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-27T09:50:39.853729Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-27T09:50:39.853751Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"info","ts":"2025-12-27T09:50:39.853765Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853767Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853782Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853783Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:50:39.853792Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853813Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853823Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853829Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853834Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853843Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"error","ts":"2025-12-27T09:50:39.853842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853851Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"info","ts":"2025-12-27T09:50:39.860164Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-27T09:50:39.860448Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.860488Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-27T09:50:39.860499Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-513251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:53:28 up  1:36,  0 user,  load average: 0.30, 0.80, 1.66
	Linux ha-513251 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908] <==
	I1227 09:52:56.791839       1 options.go:263] external host was not specified, using 192.168.49.2
	I1227 09:52:56.794835       1 server.go:150] Version: v1.35.0
	I1227 09:52:56.794953       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1227 09:52:57.278394       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:52:57.279880       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1227 09:52:57.280532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1227 09:52:57.284066       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:52:57.287400       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1227 09:52:57.287488       1 plugins.go:160] Loaded 14 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,NodeDeclaredFeatureValidator,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1227 09:52:57.287738       1 instance.go:240] Using reconciler: lease
	W1227 09:52:57.289397       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:53:17.278022       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:53:17.280084       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1227 09:53:17.288757       1 instance.go:233] Error creating leases: error creating storage factory: context deadline exceeded
	W1227 09:53:17.288844       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	
	
	==> kube-controller-manager [4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3] <==
	I1227 09:52:17.366708       1 serving.go:386] Generated self-signed cert in-memory
	I1227 09:52:17.376666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 09:52:17.376702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:52:17.378190       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 09:52:17.378332       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 09:52:17.378381       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 09:52:17.378538       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 09:52:27.380746       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [f69e010776644f8005f4cd92f4774d5dc92d62b50dadf798020d9d8db93f52a7] <==
	E1227 09:49:23.577558       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:49:25.962683       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:49:27.718552       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:49:28.729793       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:49:30.037535       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:49:34.733607       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:49:34.929599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:49:35.092788       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:49:35.988125       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:49:38.452688       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:49:38.595135       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:49:44.386717       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:49:45.790610       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:49:49.151819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:50:03.444648       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 09:50:03.690487       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:50:03.834739       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:50:04.045150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:50:07.144662       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:50:07.401271       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:50:07.608201       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:50:10.033692       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:50:13.073103       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:50:14.539398       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:50:15.853200       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	
	
	==> kubelet <==
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.788090     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.888857     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:26 ha-513251 kubelet[805]: E1227 09:53:26.990249     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.091256     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.192646     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.285773     805 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-513251?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.294159     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.395013     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.495850     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.596803     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.697685     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.798764     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:27 ha-513251 kubelet[805]: E1227 09:53:27.899624     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.001087     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.102018     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.203390     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.304389     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.405260     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.505988     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.606936     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.700974     805 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-513251\" not found" node="ha-513251"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.701065     805 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-ha-513251" containerName="etcd"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.708386     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.808940     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.909478     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-513251 -n ha-513251
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-513251 -n ha-513251: exit status 2 (326.150444ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "ha-513251" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (2.11s)
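
The "Stopped" APIServer status above matches the logs earlier in this section: the kube-apiserver exits fatally ("Error creating leases: error creating storage factory: context deadline exceeded") because its local etcd member never wins a pre-vote, so nothing ends up listening on 192.168.49.2:8443. A minimal sketch of the kind of health probe that a status check relies on, in Go; the endpoint, timeout, and TLS handling here are assumptions for illustration, not minikube's actual implementation:

	// healthprobe.go - hedged sketch: probe a Kubernetes apiserver /healthz endpoint.
	// Against the cluster above this is expected to fail with "connection refused",
	// which is what the "Stopped" APIServer status reflects.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed certificate here; verification
				// is skipped only because this is a local diagnostic sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		// 192.168.49.2:8443 is the control-plane endpoint reported in the logs above.
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // e.g. dial tcp ...: connect: connection refused
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}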

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:305: expected profile "ha-513251" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-513251\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-513251\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.35.0\",\"ClusterName\":\"ha-513251\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",
\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false
,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SS
HAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000,\"Rosetta\":false},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-513251" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-513251\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-513251\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.35.0\",\"ClusterName\":\"ha-513251\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.35.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"re
gistry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Static
IP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000,\"Rosetta\":false},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-513251
helpers_test.go:244: (dbg) docker inspect ha-513251:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13",
	        "Created": "2025-12-27T09:37:38.963263504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353813,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:45:18.061061871Z",
	            "FinishedAt": "2025-12-27T09:45:17.324877839Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/hostname",
	        "HostsPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/hosts",
	        "LogPath": "/var/lib/docker/containers/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13-json.log",
	        "Name": "/ha-513251",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-513251:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-513251",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13",
	                "LowerDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/029bd90e651b8421f72d6d16dd18adc04535f447d42c210939d6a126c2033a6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-513251",
	                "Source": "/var/lib/docker/volumes/ha-513251/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-513251",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-513251",
	                "name.minikube.sigs.k8s.io": "ha-513251",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a36bf48a852f2142e03dad97328b97c989e14e43fba2676424d26ea683f38f8a",
	            "SandboxKey": "/var/run/docker/netns/a36bf48a852f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-513251": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:f9:a2:53:37:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b4d8553c414af9c151cf56182ba5e11cb773bee9162fafd694324331063b48e",
	                    "EndpointID": "076755f827ee23e4371e7e48c17c1b2920cab289dad51349a1a50ffb80554b20",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-513251",
	                        "bb5d0cc0ca44"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-513251 -n ha-513251
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-513251 -n ha-513251: exit status 2 (376.070725ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 logs -n 25
helpers_test.go:261: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-513251 ssh -n ha-513251-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test_ha-513251-m03_ha-513251-m04.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp testdata/cp-test.txt ha-513251-m04:/home/docker/cp-test.txt                                                             │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4265014863/001/cp-test_ha-513251-m04.txt │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251:/home/docker/cp-test_ha-513251-m04_ha-513251.txt                       │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251.txt                                                 │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m02:/home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m02 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ cp      │ ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m03:/home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt               │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ ssh     │ ha-513251 ssh -n ha-513251-m03 sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ node    │ ha-513251 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:41 UTC │ 27 Dec 25 09:41 UTC │
	│ node    │ ha-513251 node start m02 --alsologtostderr -v 5                                                                                      │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:42 UTC │
	│ node    │ ha-513251 node list --alsologtostderr -v 5                                                                                           │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │                     │
	│ stop    │ ha-513251 stop --alsologtostderr -v 5                                                                                                │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:42 UTC │ 27 Dec 25 09:43 UTC │
	│ start   │ ha-513251 start --wait true --alsologtostderr -v 5                                                                                   │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:43 UTC │ 27 Dec 25 09:44 UTC │
	│ node    │ ha-513251 node list --alsologtostderr -v 5                                                                                           │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │                     │
	│ node    │ ha-513251 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │ 27 Dec 25 09:44 UTC │
	│ stop    │ ha-513251 stop --alsologtostderr -v 5                                                                                                │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:44 UTC │ 27 Dec 25 09:45 UTC │
	│ start   │ ha-513251 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:45 UTC │                     │
	│ node    │ ha-513251 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-513251 │ jenkins │ v1.37.0 │ 27 Dec 25 09:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:45:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:45:17.780858  353683 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:45:17.781066  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781099  353683 out.go:374] Setting ErrFile to fd 2...
	I1227 09:45:17.781121  353683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.781427  353683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:45:17.781839  353683 out.go:368] Setting JSON to false
	I1227 09:45:17.782724  353683 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5271,"bootTime":1766823447,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:45:17.782828  353683 start.go:143] virtualization:  
	I1227 09:45:17.786847  353683 out.go:179] * [ha-513251] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:45:17.789790  353683 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:45:17.789897  353683 notify.go:221] Checking for updates...
	I1227 09:45:17.795846  353683 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:45:17.798784  353683 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:17.801736  353683 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:45:17.804638  353683 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:45:17.807626  353683 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:45:17.811252  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:17.811891  353683 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:45:17.840112  353683 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:45:17.840288  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.900770  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.89071505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.900884  353683 docker.go:319] overlay module found
	I1227 09:45:17.905637  353683 out.go:179] * Using the docker driver based on existing profile
	I1227 09:45:17.908470  353683 start.go:309] selected driver: docker
	I1227 09:45:17.908492  353683 start.go:928] validating driver "docker" against &{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.908638  353683 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:45:17.908737  353683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:45:17.967550  353683 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:45:17.958343241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:45:17.968010  353683 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:45:17.968048  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:17.968104  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:17.968157  353683 start.go:353] cluster config:
	{Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:17.971557  353683 out.go:179] * Starting "ha-513251" primary control-plane node in "ha-513251" cluster
	I1227 09:45:17.974341  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:17.977308  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:17.980127  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:17.980181  353683 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:45:17.980196  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:17.980207  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:17.980281  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:17.980293  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:17.980447  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.000295  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:18.000319  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:18.000341  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:18.000375  353683 start.go:360] acquireMachinesLock for ha-513251: {Name:mka277024f8c2226ae51cd2727a8e25e47e84998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:18.000447  353683 start.go:364] duration metric: took 46.926µs to acquireMachinesLock for "ha-513251"
	I1227 09:45:18.000468  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:18.000475  353683 fix.go:54] fixHost starting: 
	I1227 09:45:18.000773  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.022293  353683 fix.go:112] recreateIfNeeded on ha-513251: state=Stopped err=<nil>
	W1227 09:45:18.022327  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:18.025796  353683 out.go:252] * Restarting existing docker container for "ha-513251" ...
	I1227 09:45:18.025962  353683 cli_runner.go:164] Run: docker start ha-513251
	I1227 09:45:18.291407  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:18.313034  353683 kic.go:430] container "ha-513251" state is running.
	I1227 09:45:18.313680  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:18.336728  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:18.337162  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:18.337228  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:18.363888  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:18.364313  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:18.364324  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:18.365396  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:45:21.507722  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.507748  353683 ubuntu.go:182] provisioning hostname "ha-513251"
	I1227 09:45:21.507813  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.525335  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.525658  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.525674  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251 && echo "ha-513251" | sudo tee /etc/hostname
	I1227 09:45:21.674143  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251
	
	I1227 09:45:21.674300  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.692486  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:21.692814  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:21.692838  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:21.832635  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:21.832681  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:21.832704  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:21.832713  353683 provision.go:84] configureAuth start
	I1227 09:45:21.832776  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:21.851553  353683 provision.go:143] copyHostCerts
	I1227 09:45:21.851617  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851676  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:21.851690  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:21.851770  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:21.851873  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851904  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:21.851923  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:21.851962  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:21.852092  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852114  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:21.852123  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:21.852155  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:21.852214  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251 san=[127.0.0.1 192.168.49.2 ha-513251 localhost minikube]
	I1227 09:45:21.903039  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:21.903143  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:21.903193  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:21.920995  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.020706  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:22.020772  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1227 09:45:22.040457  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:22.040545  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:45:22.059426  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:22.059522  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:22.078437  353683 provision.go:87] duration metric: took 245.707104ms to configureAuth
	I1227 09:45:22.078487  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:22.078740  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:22.078852  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.097273  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:22.097592  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I1227 09:45:22.097611  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:22.461249  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:22.461332  353683 machine.go:97] duration metric: took 4.124155515s to provisionDockerMachine
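All of the provisioning above (the hostname change, the /etc/hosts rewrite, the CRIO_MINIKUBE_OPTIONS drop-in) is executed over SSH against the container's forwarded port 33198 using the profile's id_rsa key. The stand-alone sketch below shows that general pattern with the golang.org/x/crypto/ssh package; it is not minikube's own code, and only the address, user and key path are taken from what the log already shows.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and endpoint as reported in the log above; adjust for other runs.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test environment
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33198", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Run a single remote command, as the provisioner does for "hostname" above.
        out, err := session.CombinedOutput("hostname")
        fmt.Printf("output: %s err: %v\n", out, err)
    }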
	I1227 09:45:22.461358  353683 start.go:293] postStartSetup for "ha-513251" (driver="docker")
	I1227 09:45:22.461396  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:22.461505  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:22.461577  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.484466  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.588039  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:22.591353  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:22.591383  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:22.591396  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:22.591453  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:22.591540  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:22.591553  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:22.591653  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:22.599440  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:22.617415  353683 start.go:296] duration metric: took 156.015491ms for postStartSetup
	I1227 09:45:22.617497  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:22.617543  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.635627  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.733536  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:22.738441  353683 fix.go:56] duration metric: took 4.73795966s for fixHost
	I1227 09:45:22.738473  353683 start.go:83] releasing machines lock for "ha-513251", held for 4.738016497s
	I1227 09:45:22.738547  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:45:22.756007  353683 ssh_runner.go:195] Run: cat /version.json
	I1227 09:45:22.756077  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.756356  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:22.756411  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:45:22.775684  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.784683  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:45:22.974776  353683 ssh_runner.go:195] Run: systemctl --version
	I1227 09:45:22.981407  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:23.019688  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:23.024397  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:23.024482  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:23.033023  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:23.033048  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:23.033080  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:23.033128  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:23.048890  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:23.062391  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:23.062461  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:23.078874  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:23.092641  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:23.215628  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:23.336773  353683 docker.go:234] disabling docker service ...
	I1227 09:45:23.336856  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:23.351993  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:23.365076  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:23.486999  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:23.607630  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:23.621666  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:23.637617  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:23.637733  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.646729  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:23.646803  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.656407  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.665374  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.674513  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:23.682899  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.692638  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.701500  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:23.710461  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:23.718222  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:45:23.726035  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:23.837128  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:45:24.007170  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:45:24.007319  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:45:24.014123  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:45:24.014245  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:45:24.033366  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:45:24.058444  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
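The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a bounded poll for the runtime socket after crio is restarted, followed by a crictl version probe. A minimal illustration of such a wait loop (not the minikube implementation) could look like this, assuming the caller is allowed to dial the socket directly:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSocket polls a unix socket until it accepts a connection or the
    // deadline passes, mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("unix", path, time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("runtime socket is up")
    }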
	I1227 09:45:24.058524  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.087072  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:45:24.118588  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:45:24.121527  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:45:24.138224  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:45:24.142467  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:45:24.152932  353683 kubeadm.go:884] updating cluster {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:45:24.153087  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:24.153163  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.188918  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.188945  353683 crio.go:433] Images already preloaded, skipping extraction
	I1227 09:45:24.189006  353683 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:45:24.216272  353683 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 09:45:24.216301  353683 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:45:24.216314  353683 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I1227 09:45:24.216440  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:45:24.216534  353683 ssh_runner.go:195] Run: crio config
	I1227 09:45:24.292083  353683 cni.go:84] Creating CNI manager for ""
	I1227 09:45:24.292105  353683 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1227 09:45:24.292144  353683 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:45:24.292181  353683 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-513251 NodeName:ha-513251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:45:24.292330  353683 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-513251"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
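The kubeadm configuration printed above is generated in memory and later copied to /var/tmp/minikube/kubeadm.yaml.new (2226 bytes, see below). The sketch that follows shows how one fragment of it can be parsed back in Go, assuming gopkg.in/yaml.v3 is available; it is illustrative only and covers just the KubeProxyConfiguration section shown here.

    package main

    import (
        "fmt"
        "log"

        "gopkg.in/yaml.v3"
    )

    // kubeProxyConfig models a tiny subset of KubeProxyConfiguration, enough to
    // read back the fields that appear in the generated config above.
    type kubeProxyConfig struct {
        APIVersion  string `yaml:"apiVersion"`
        Kind        string `yaml:"kind"`
        ClusterCIDR string `yaml:"clusterCIDR"`
    }

    const fragment = `apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    clusterCIDR: "10.244.0.0/16"
    `

    func main() {
        var cfg kubeProxyConfig
        if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s %s uses pod CIDR %s\n", cfg.APIVersion, cfg.Kind, cfg.ClusterCIDR)
    }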
	
	I1227 09:45:24.292352  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:45:24.292412  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:45:24.304778  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:45:24.304912  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
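The generated kube-vip manifest above omits IPVS-based control-plane load-balancing because the `sudo sh -c "lsmod | grep ip_vs"` probe exited with status 1 and empty output, so only the ARP-announced VIP at 192.168.49.254 is configured. An equivalent check can be made directly against /proc/modules, the file lsmod reads; the sketch below is illustrative and not taken from the minikube source.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasIPVS reports whether any ip_vs* module is currently loaded by scanning
    // /proc/modules, which is the data source lsmod formats.
    func hasIPVS() (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) > 0 && strings.HasPrefix(fields[0], "ip_vs") {
                return true, nil
            }
        }
        return false, scanner.Err()
    }

    func main() {
        ok, err := hasIPVS()
        if err != nil {
            fmt.Println("could not check:", err)
            return
        }
        if ok {
            fmt.Println("ip_vs modules loaded: kube-vip could enable load-balancing")
        } else {
            fmt.Println("no ip_vs modules: fall back to ARP-only VIP, as in this run")
        }
    }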
	I1227 09:45:24.305012  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:45:24.312901  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:45:24.312976  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1227 09:45:24.320559  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1227 09:45:24.334537  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:45:24.347371  353683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1227 09:45:24.360123  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:45:24.373098  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:45:24.376820  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:45:24.387127  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:24.503934  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:45:24.522185  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.2
	I1227 09:45:24.522204  353683 certs.go:195] generating shared ca certs ...
	I1227 09:45:24.522219  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.522359  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:45:24.522410  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:45:24.522417  353683 certs.go:257] generating profile certs ...
	I1227 09:45:24.522498  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:45:24.522526  353683 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14
	I1227 09:45:24.522540  353683 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1227 09:45:24.644648  353683 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 ...
	I1227 09:45:24.648971  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14: {Name:mkb5dff6e9ccf7c0fd52113e0d144d6316de11fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649217  353683 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 ...
	I1227 09:45:24.649259  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14: {Name:mk0fad6909993d85239fadc763725d8b8b7a440c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:24.649401  353683 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt
	I1227 09:45:24.649572  353683 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.667f4b14 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key
	I1227 09:45:24.649765  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:45:24.649810  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:45:24.649846  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:45:24.649875  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:45:24.649918  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:45:24.649950  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:45:24.649988  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:45:24.650030  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:45:24.650060  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:45:24.650137  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:45:24.650200  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:45:24.650235  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:45:24.650297  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:45:24.650344  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:45:24.650434  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:45:24.650545  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:24.650616  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.650660  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:24.650689  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.651244  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:45:24.675286  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:45:24.694694  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:45:24.717231  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:45:24.749389  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:45:24.770851  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:45:24.790309  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:45:24.811612  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:45:24.834366  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:45:24.853802  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:45:24.871797  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:45:24.894130  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:45:24.908139  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:45:24.914716  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.922797  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:45:24.930729  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934601  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.934686  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:45:24.976521  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:45:24.984298  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:45:24.991944  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:45:24.999664  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020750  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.020853  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:45:25.066886  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:45:25.076628  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.086029  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:45:25.095338  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101041  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.101118  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:45:25.145647  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:45:25.156431  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:45:25.165145  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:45:25.214664  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:45:25.265928  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:45:25.352085  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:45:25.431634  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:45:25.492845  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
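Each of the `openssl x509 -noout -in ... -checkend 86400` commands above confirms that an existing control-plane certificate remains valid for at least another 24 hours before the cluster is reused. The same check can be expressed with Go's crypto/x509; the path below is one of the files the log inspects, and the rest is an illustrative sketch, not minikube code.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d, matching the semantics of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        if soon {
            fmt.Println("certificate expires within 24h: would need regeneration")
        } else {
            fmt.Println("certificate valid for at least 24h: safe to reuse")
        }
    }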
	I1227 09:45:25.554400  353683 kubeadm.go:401] StartCluster: {Name:ha-513251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:45:25.554601  353683 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 09:45:25.554705  353683 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:45:25.597558  353683 cri.go:96] found id: "7b1da10d6de7d31911e815a0a6e65bec0b462f36adac4663bcba270a51072ce3"
	I1227 09:45:25.597630  353683 cri.go:96] found id: "f69e010776644f8005f4cd92f4774d5dc92d62b50dadf798020d9d8db93f52a7"
	I1227 09:45:25.597649  353683 cri.go:96] found id: "f7e841ab1c87c3a73fb0fa9774a7d5540fae4454f87f94803231876049f07db7"
	I1227 09:45:25.597672  353683 cri.go:96] found id: "c8b5eff27c4f32b2e2d3926915d5eef69dcc564f101afeb65284237bedc9de47"
	I1227 09:45:25.597710  353683 cri.go:96] found id: "cc9aea908d640c5405a83f2749f502470c2bdf01223971af7da3ebb2588fd6ab"
	I1227 09:45:25.597733  353683 cri.go:96] found id: ""
	I1227 09:45:25.597819  353683 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 09:45:25.609417  353683 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:45:25Z" level=error msg="open /run/runc: no such file or directory"
	I1227 09:45:25.609569  353683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:45:25.618182  353683 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:45:25.618252  353683 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:45:25.618336  353683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:45:25.632559  353683 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:45:25.633086  353683 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-513251" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.633265  353683 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "ha-513251" cluster setting kubeconfig missing "ha-513251" context setting]
	I1227 09:45:25.633617  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.634243  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 09:45:25.635070  353683 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 09:45:25.635170  353683 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 09:45:25.635191  353683 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 09:45:25.635109  353683 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1227 09:45:25.635305  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 09:45:25.635338  353683 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 09:45:25.635362  353683 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 09:45:25.635701  353683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:45:25.649140  353683 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1227 09:45:25.649211  353683 kubeadm.go:602] duration metric: took 30.937903ms to restartPrimaryControlPlane
	I1227 09:45:25.649235  353683 kubeadm.go:403] duration metric: took 94.844629ms to StartCluster
	I1227 09:45:25.649264  353683 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.649374  353683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:45:25.650129  353683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:45:25.650407  353683 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:45:25.650466  353683 start.go:242] waiting for startup goroutines ...
	I1227 09:45:25.650497  353683 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:45:25.651321  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.656567  353683 out.go:179] * Enabled addons: 
	I1227 09:45:25.659428  353683 addons.go:530] duration metric: took 8.91449ms for enable addons: enabled=[]
	I1227 09:45:25.659506  353683 start.go:247] waiting for cluster config update ...
	I1227 09:45:25.659529  353683 start.go:256] writing updated cluster config ...
	I1227 09:45:25.662807  353683 out.go:203] 
	I1227 09:45:25.666068  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:25.666232  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.669730  353683 out.go:179] * Starting "ha-513251-m02" control-plane node in "ha-513251" cluster
	I1227 09:45:25.672614  353683 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:45:25.675545  353683 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:45:25.678485  353683 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 09:45:25.678506  353683 cache.go:65] Caching tarball of preloaded images
	I1227 09:45:25.678618  353683 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 09:45:25.678630  353683 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 09:45:25.678752  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:25.678961  353683 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:45:25.700973  353683 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:45:25.701000  353683 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:45:25.701015  353683 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:45:25.701040  353683 start.go:360] acquireMachinesLock for ha-513251-m02: {Name:mk859480e290b8b366277aa9ac48e168657809ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:45:25.701095  353683 start.go:364] duration metric: took 35.808µs to acquireMachinesLock for "ha-513251-m02"
	I1227 09:45:25.701120  353683 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:45:25.701128  353683 fix.go:54] fixHost starting: m02
	I1227 09:45:25.701383  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:25.721891  353683 fix.go:112] recreateIfNeeded on ha-513251-m02: state=Stopped err=<nil>
	W1227 09:45:25.721916  353683 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:45:25.725291  353683 out.go:252] * Restarting existing docker container for "ha-513251-m02" ...
	I1227 09:45:25.725375  353683 cli_runner.go:164] Run: docker start ha-513251-m02
	I1227 09:45:26.149022  353683 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:26.186961  353683 kic.go:430] container "ha-513251-m02" state is running.
	I1227 09:45:26.187328  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:26.217667  353683 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/config.json ...
	I1227 09:45:26.217913  353683 machine.go:94] provisionDockerMachine start ...
	I1227 09:45:26.217973  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:26.245157  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:26.245467  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:26.245482  353683 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:45:26.246067  353683 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55528->127.0.0.1:33203: read: connection reset by peer
	I1227 09:45:29.476637  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.476662  353683 ubuntu.go:182] provisioning hostname "ha-513251-m02"
	I1227 09:45:29.476730  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.515584  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.515885  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.515896  353683 main.go:144] libmachine: About to run SSH command:
	sudo hostname ha-513251-m02 && echo "ha-513251-m02" | sudo tee /etc/hostname
	I1227 09:45:29.753613  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: ha-513251-m02
	
	I1227 09:45:29.753763  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:29.802708  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:29.803015  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:29.803031  353683 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-513251-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-513251-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-513251-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:45:30.026916  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:45:30.027002  353683 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 09:45:30.027040  353683 ubuntu.go:190] setting up certificates
	I1227 09:45:30.027088  353683 provision.go:84] configureAuth start
	I1227 09:45:30.027213  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:30.061355  353683 provision.go:143] copyHostCerts
	I1227 09:45:30.061395  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061429  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 09:45:30.061436  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 09:45:30.061516  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 09:45:30.061646  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061664  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 09:45:30.061668  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 09:45:30.061698  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 09:45:30.061741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061761  353683 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 09:45:30.061766  353683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 09:45:30.061789  353683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 09:45:30.061835  353683 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.ha-513251-m02 san=[127.0.0.1 192.168.49.3 ha-513251-m02 localhost minikube]
	I1227 09:45:30.366138  353683 provision.go:177] copyRemoteCerts
	I1227 09:45:30.366258  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:45:30.366380  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.384700  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:30.494344  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:45:30.494406  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 09:45:30.530895  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:45:30.530955  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1227 09:45:30.561682  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:45:30.561747  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:45:30.591679  353683 provision.go:87] duration metric: took 564.557502ms to configureAuth
	I1227 09:45:30.591755  353683 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:45:30.592084  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:30.592246  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:30.621605  353683 main.go:144] libmachine: Using SSH client type: native
	I1227 09:45:30.621922  353683 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I1227 09:45:30.621937  353683 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 09:45:31.635140  353683 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 09:45:31.635164  353683 machine.go:97] duration metric: took 5.417238886s to provisionDockerMachine
	I1227 09:45:31.635176  353683 start.go:293] postStartSetup for "ha-513251-m02" (driver="docker")
	I1227 09:45:31.635186  353683 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:45:31.635250  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:45:31.635298  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.672186  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.803466  353683 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:45:31.807580  353683 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:45:31.807606  353683 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:45:31.807617  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 09:45:31.807677  353683 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 09:45:31.807750  353683 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 09:45:31.807757  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /etc/ssl/certs/2998112.pem
	I1227 09:45:31.807862  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:45:31.825236  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:45:31.845471  353683 start.go:296] duration metric: took 210.280443ms for postStartSetup
	I1227 09:45:31.845631  353683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:45:31.845704  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:31.863181  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:31.978613  353683 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:45:31.988190  353683 fix.go:56] duration metric: took 6.287056138s for fixHost
	I1227 09:45:31.988218  353683 start.go:83] releasing machines lock for "ha-513251-m02", held for 6.287109349s
	I1227 09:45:31.988301  353683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m02
	I1227 09:45:32.022351  353683 out.go:179] * Found network options:
	I1227 09:45:32.025233  353683 out.go:179]   - NO_PROXY=192.168.49.2
	W1227 09:45:32.028060  353683 proxy.go:120] fail to check proxy env: Error ip not in block
	W1227 09:45:32.028113  353683 proxy.go:120] fail to check proxy env: Error ip not in block
	I1227 09:45:32.028186  353683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 09:45:32.028235  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.028260  353683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:45:32.028315  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m02
	I1227 09:45:32.062562  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.071385  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m02/id_rsa Username:docker}
	I1227 09:45:32.418806  353683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:45:32.560316  353683 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:45:32.560399  353683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:45:32.576611  353683 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:45:32.576635  353683 start.go:496] detecting cgroup driver to use...
	I1227 09:45:32.576667  353683 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:45:32.576717  353683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:45:32.603470  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:45:32.627343  353683 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:45:32.627407  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:45:32.650889  353683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:45:32.671280  353683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:45:32.901177  353683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:45:33.083402  353683 docker.go:234] disabling docker service ...
	I1227 09:45:33.083516  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:45:33.102162  353683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:45:33.117631  353683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:45:33.330335  353683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:45:33.571932  353683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:45:33.588507  353683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:45:33.603417  353683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 09:45:33.603487  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.613092  353683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 09:45:33.613161  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.622600  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.632017  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.641471  353683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:45:33.650218  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.659580  353683 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.675788  353683 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 09:45:33.690916  353683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:45:33.699830  353683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:45:33.710022  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:45:33.856695  353683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 09:47:04.177050  353683 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320263495s)
	I1227 09:47:04.177079  353683 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 09:47:04.177137  353683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 09:47:04.181790  353683 start.go:574] Will wait 60s for crictl version
	I1227 09:47:04.181861  353683 ssh_runner.go:195] Run: which crictl
	I1227 09:47:04.185784  353683 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:47:04.214501  353683 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 09:47:04.214588  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.244971  353683 ssh_runner.go:195] Run: crio --version
	I1227 09:47:04.277197  353683 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 09:47:04.280209  353683 out.go:179]   - env NO_PROXY=192.168.49.2
	I1227 09:47:04.283165  353683 cli_runner.go:164] Run: docker network inspect ha-513251 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:47:04.300447  353683 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1227 09:47:04.304396  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:47:04.314914  353683 mustload.go:66] Loading cluster: ha-513251
	I1227 09:47:04.315173  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:04.315461  353683 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:47:04.333467  353683 host.go:66] Checking if "ha-513251" exists ...
	I1227 09:47:04.333753  353683 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251 for IP: 192.168.49.3
	I1227 09:47:04.333767  353683 certs.go:195] generating shared ca certs ...
	I1227 09:47:04.333782  353683 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:47:04.333906  353683 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 09:47:04.333952  353683 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 09:47:04.333962  353683 certs.go:257] generating profile certs ...
	I1227 09:47:04.334040  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key
	I1227 09:47:04.334105  353683 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key.2d598068
	I1227 09:47:04.334153  353683 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key
	I1227 09:47:04.334168  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:47:04.334198  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:47:04.334237  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:47:04.334248  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:47:04.334259  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:47:04.334275  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:47:04.334287  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:47:04.334306  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:47:04.334366  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 09:47:04.334408  353683 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 09:47:04.334421  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:47:04.334448  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 09:47:04.334582  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:47:04.334618  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 09:47:04.334672  353683 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 09:47:04.334711  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.334729  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.334741  353683 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem -> /usr/share/ca-certificates/299811.pem
	I1227 09:47:04.334806  353683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:47:04.352745  353683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:47:04.444298  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1227 09:47:04.448354  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1227 09:47:04.456699  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1227 09:47:04.460541  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1227 09:47:04.469121  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1227 09:47:04.472996  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1227 09:47:04.481446  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1227 09:47:04.484933  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1227 09:47:04.493259  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1227 09:47:04.497027  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1227 09:47:04.505596  353683 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1227 09:47:04.509294  353683 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1227 09:47:04.517713  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:47:04.537012  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:47:04.556494  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:47:04.576418  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:47:04.597182  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 09:47:04.618229  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:47:04.641696  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:47:04.663252  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:47:04.684934  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 09:47:04.716644  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:47:04.737307  353683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 09:47:04.758667  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1227 09:47:04.773792  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1227 09:47:04.788292  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1227 09:47:04.802374  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1227 09:47:04.817583  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1227 09:47:04.831128  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1227 09:47:04.845769  353683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (728 bytes)
	I1227 09:47:04.860041  353683 ssh_runner.go:195] Run: openssl version
	I1227 09:47:04.866442  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.874396  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 09:47:04.882193  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886310  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.886373  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 09:47:04.928354  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:47:04.936052  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.943752  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:47:04.952048  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956067  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.956176  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:47:04.997608  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:47:05.007408  353683 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.017602  353683 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 09:47:05.026017  353683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030271  353683 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.030427  353683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 09:47:05.074213  353683 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:47:05.082090  353683 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:47:05.086100  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:47:05.128461  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:47:05.172974  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:47:05.215663  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:47:05.263541  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:47:05.307445  353683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 09:47:05.354461  353683 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.35.0 crio true true} ...
	I1227 09:47:05.354578  353683 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-513251-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:ha-513251 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:47:05.354622  353683 kube-vip.go:115] generating kube-vip config ...
	I1227 09:47:05.354681  353683 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1227 09:47:05.367621  353683 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:47:05.367701  353683 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1227 09:47:05.367789  353683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:47:05.376110  353683 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:47:05.376225  353683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1227 09:47:05.385227  353683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1227 09:47:05.399058  353683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:47:05.412225  353683 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1227 09:47:05.433740  353683 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1227 09:47:05.438137  353683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:47:05.449160  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.584548  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.598901  353683 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 09:47:05.599307  353683 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:47:05.602962  353683 out.go:179] * Verifying Kubernetes components...
	I1227 09:47:05.605544  353683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:47:05.743183  353683 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:47:05.759331  353683 kapi.go:59] client config for ha-513251: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/ha-513251/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1227 09:47:05.759399  353683 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1227 09:47:05.759628  353683 node_ready.go:35] waiting up to 6m0s for node "ha-513251-m02" to be "Ready" ...
	I1227 09:47:36.941690  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:47:36.942141  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:47:39.260337  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:41.261089  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:43.760342  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:47:45.760773  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:48:49.689213  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:48:49.689567  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:59702->192.168.49.2:8443: read: connection reset by peer
	W1227 09:48:51.761173  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:54.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:56.260764  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:48:58.260950  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:00.261274  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:02.761180  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:05.261164  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:07.760850  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:09.761126  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:49:12.261097  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1227 09:50:17.401158  353683 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02"
	W1227 09:50:17.401610  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1227 09:50:19.760255  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:21.761012  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:23.761193  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:26.260515  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:28.760208  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:30.760293  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:33.261011  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:35.760559  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:38.260275  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:40.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:42.761015  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:45.260386  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:47.760256  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:50.260156  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:52.261185  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:54.760529  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:56.760914  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:50:58.761079  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:01.260894  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:03.261105  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:05.760186  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:07.761034  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:09.761091  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:21.261754  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:31.267176  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": net/http: TLS handshake timeout
	W1227 09:51:33.760222  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:35.760997  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:37.761041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:40.260968  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:42.261084  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:44.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:47.260390  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:49.760248  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:51.760405  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:54.260216  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:56.260474  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:51:58.261041  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:00.760814  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:03.260223  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:05.261042  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:07.760972  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:09.761080  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:12.261019  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:14.760290  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:17.260953  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:19.261221  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:21.760250  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:23.760454  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:25.760687  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:28.260326  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:30.260374  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:32.760183  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:34.761068  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:37.261218  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:39.760434  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:41.760931  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:44.260297  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:46.260721  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:48.261137  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:50.760243  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:53.260234  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:52:55.261149  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1227 09:53:05.759756  353683 node_ready.go:55] error getting node "ha-513251-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-513251-m02": context deadline exceeded
	I1227 09:53:05.759808  353683 node_ready.go:38] duration metric: took 6m0.000151574s for node "ha-513251-m02" to be "Ready" ...
	I1227 09:53:05.763182  353683 out.go:203] 
	W1227 09:53:05.766205  353683 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1227 09:53:05.766232  353683 out.go:285] * 
	W1227 09:53:05.766486  353683 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:53:05.771303  353683 out.go:203] 
	
	
	==> CRI-O <==
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.704047446Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=df594719-7494-4e4b-8b96-ff6b50da7943 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.705214137Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=1db5f92f-e3de-4051-b1e8-f4a521df221b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.705367008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.714658246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.715313645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.733436178Z" level=info msg="Created container 4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=1db5f92f-e3de-4051-b1e8-f4a521df221b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.734077735Z" level=info msg="Starting container: 4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3" id=eeb97769-30ec-478a-bc87-4f69060f31cf name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:52:16 ha-513251 crio[669]: time="2025-12-27T09:52:16.736017513Z" level=info msg="Started container" PID=1255 containerID=4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3 description=kube-system/kube-controller-manager-ha-513251/kube-controller-manager id=eeb97769-30ec-478a-bc87-4f69060f31cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=77c125920c3f982d94f3bc7831d664d32af1a76fa71da885b790a865c893eed1
	Dec 27 09:52:27 ha-513251 conmon[1253]: conmon 4694ec899710cc574db8 <ninfo>: container 1255 exited with status 1
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.770243941Z" level=info msg="Removing container: 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.777709971Z" level=info msg="Error loading conmon cgroup of container 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6: cgroup deleted" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:27 ha-513251 crio[669]: time="2025-12-27T09:52:27.780784669Z" level=info msg="Removed container 2d96035cdd3ce31e663f85efbc2212452112dbdba91bb658842c231359c318e6: kube-system/kube-controller-manager-ha-513251/kube-controller-manager" id=da55eb3c-7976-48ba-a75f-a39739218412 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.701490281Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=079299f4-9d89-491a-8d17-2a3678443aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.702675032Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=8c967218-255a-4dbf-a2a1-3e466c02b6e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.703767818Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=0fd3892f-ad02-44ff-b1fb-2d96da8680c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.70386432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.712337252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.712883974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.73007927Z" level=info msg="Created container 7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=0fd3892f-ad02-44ff-b1fb-2d96da8680c0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.730833829Z" level=info msg="Starting container: 7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908" id=0cf41e99-8376-4017-8d87-0efd593514d8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 09:52:56 ha-513251 crio[669]: time="2025-12-27T09:52:56.740489445Z" level=info msg="Started container" PID=1272 containerID=7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908 description=kube-system/kube-apiserver-ha-513251/kube-apiserver id=0cf41e99-8376-4017-8d87-0efd593514d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a678fda46be5a152fa8932be97637587d68f62be01ebcbef8a2cc06dc92777be
	Dec 27 09:53:17 ha-513251 conmon[1269]: conmon 7e32d77299b93ef151c5 <ninfo>: container 1272 exited with status 255
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.881086782Z" level=info msg="Removing container: 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.888327431Z" level=info msg="Error loading conmon cgroup of container 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53: cgroup deleted" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 09:53:17 ha-513251 crio[669]: time="2025-12-27T09:53:17.891415742Z" level=info msg="Removed container 1ec411df6464eb13f470690685876070ae1d07d5525d5abf026a035ab3f6cf53: kube-system/kube-apiserver-ha-513251/kube-apiserver" id=3eb0280a-0821-455b-a788-d923172551a2 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	7e32d77299b93       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   34 seconds ago       Exited              kube-apiserver            7                   a678fda46be5a       kube-apiserver-ha-513251            kube-system
	4694ec899710c       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   About a minute ago   Exited              kube-controller-manager   9                   77c125920c3f9       kube-controller-manager-ha-513251   kube-system
	3e2f79bfcc297       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   2 minutes ago        Running             etcd                      3                   0b4fdbfc50d52       etcd-ha-513251                      kube-system
	f69e010776644       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   8 minutes ago        Running             kube-scheduler            2                   8f6686604e637       kube-scheduler-ha-513251            kube-system
	f7e841ab1c87c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   8 minutes ago        Exited              etcd                      2                   0b4fdbfc50d52       etcd-ha-513251                      kube-system
	cc9aea908d640       28c5662932f6032ee4faba083d9c2af90232797e1d4f89d9892cb92b26fec299   8 minutes ago        Running             kube-vip                  2                   9c394d0758080       kube-vip-ha-513251                  kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015479] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.516409] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034238] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.771451] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.481009] kauditd_printk_skb: 39 callbacks suppressed
	[Dec27 08:29] hrtimer: interrupt took 43410871 ns
	[Dec27 09:29] kauditd_printk_skb: 8 callbacks suppressed
	[Dec27 09:30] overlayfs: idmapped layers are currently not supported
	[  +0.068519] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec27 09:34] overlayfs: idmapped layers are currently not supported
	[ +46.937326] overlayfs: idmapped layers are currently not supported
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:42] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[  +3.379616] overlayfs: idmapped layers are currently not supported
	[ +26.881821] overlayfs: idmapped layers are currently not supported
	[Dec27 09:44] overlayfs: idmapped layers are currently not supported
	[Dec27 09:45] overlayfs: idmapped layers are currently not supported
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3e2f79bfcc29755ed4c6ee91cec29fd05896c608e4d72883a5b019d5f8609903] <==
	{"level":"warn","ts":"2025-12-27T09:53:25.834596Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000333125s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-12-27T09:53:25.834623Z","caller":"traceutil/trace.go:172","msg":"trace[2092953664] range","detail":"{range_begin:; range_end:; }","duration":"7.000376874s","start":"2025-12-27T09:53:18.834234Z","end":"2025-12-27T09:53:25.834611Z","steps":["trace[2092953664] 'agreement among raft nodes before linearized reading'  (duration: 7.000330728s)"],"step_count":1}
	{"level":"error","ts":"2025-12-27T09:53:25.834682Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]non_learner ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2294\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2822\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3301\nnet/http.(*conn).serve\n\tnet/http/server.go:2102"}
	{"level":"info","ts":"2025-12-27T09:53:26.461387Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461441Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461464Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461493Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:26.461504Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T09:53:28.061763Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:28.061887Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:28.061908Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:28.061939Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:28.061950Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-27T09:53:28.098785Z","caller":"etcdserver/server.go:1830","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-513251 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"warn","ts":"2025-12-27T09:53:29.335165Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447400,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-27T09:53:29.660961Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:29.661010Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:29.661047Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2866] sent MsgPreVote request to 8e7fd81d8c1de671 at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:29.661076Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-12-27T09:53:29.661086Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-27T09:53:29.835276Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447400,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:30.106458Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8e7fd81d8c1de671","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T09:53:30.106478Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8e7fd81d8c1de671","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-27T09:53:30.335553Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447400,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-27T09:53:30.836149Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128042260754447400,"retry-timeout":"500ms"}
	
	
	==> etcd [f7e841ab1c87c3a73fb0fa9774a7d5540fae4454f87f94803231876049f07db7] <==
	{"level":"info","ts":"2025-12-27T09:50:39.850450Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T09:50:39.850492Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"ha-513251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-27T09:50:39.850587Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:50:39.852081Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T09:50:39.853588Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853628Z","caller":"etcdserver/server.go:1288","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"info","ts":"2025-12-27T09:50:39.853662Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-27T09:50:39.853729Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-27T09:50:39.853751Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"info","ts":"2025-12-27T09:50:39.853765Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853767Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853782Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853783Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T09:50:39.853792Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853813Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853823Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853829Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"warn","ts":"2025-12-27T09:50:39.853834Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-27T09:50:39.853843Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"error","ts":"2025-12-27T09:50:39.853842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.853851Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"8e7fd81d8c1de671"}
	{"level":"info","ts":"2025-12-27T09:50:39.860164Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-27T09:50:39.860448Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T09:50:39.860488Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-27T09:50:39.860499Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-513251","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:53:31 up  1:36,  0 user,  load average: 0.30, 0.80, 1.66
	Linux ha-513251 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [7e32d77299b93ef151c5217dafe3ce3478c4c71878af32039cb46a1c45e07908] <==
	I1227 09:52:56.791839       1 options.go:263] external host was not specified, using 192.168.49.2
	I1227 09:52:56.794835       1 server.go:150] Version: v1.35.0
	I1227 09:52:56.794953       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1227 09:52:57.278394       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:52:57.279880       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1227 09:52:57.280532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1227 09:52:57.284066       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 09:52:57.287400       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1227 09:52:57.287488       1 plugins.go:160] Loaded 14 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,NodeDeclaredFeatureValidator,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1227 09:52:57.287738       1 instance.go:240] Using reconciler: lease
	W1227 09:52:57.289397       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:53:17.278022       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1227 09:53:17.280084       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1227 09:53:17.288757       1 instance.go:233] Error creating leases: error creating storage factory: context deadline exceeded
	W1227 09:53:17.288844       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	
	
	==> kube-controller-manager [4694ec899710cc574db82e0ddd1b1b6e98a94664666cb5377ea1dbb9d5c5b2d3] <==
	I1227 09:52:17.366708       1 serving.go:386] Generated self-signed cert in-memory
	I1227 09:52:17.376666       1 controllermanager.go:189] "Starting" version="v1.35.0"
	I1227 09:52:17.376702       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 09:52:17.378190       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1227 09:52:17.378332       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1227 09:52:17.378381       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1227 09:52:17.378538       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 09:52:27.380746       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [f69e010776644f8005f4cd92f4774d5dc92d62b50dadf798020d9d8db93f52a7] <==
	E1227 09:49:23.577558       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 09:49:25.962683       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:49:27.718552       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 09:49:28.729793       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:49:30.037535       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:49:34.733607       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:49:34.929599       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:49:35.092788       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 09:49:35.988125       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 09:49:38.452688       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 09:49:38.595135       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 09:49:44.386717       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 09:49:45.790610       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 09:49:49.151819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 09:50:03.444648       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 09:50:03.690487       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 09:50:03.834739       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 09:50:04.045150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 09:50:07.144662       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 09:50:07.401271       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 09:50:07.608201       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 09:50:10.033692       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 09:50:13.073103       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 09:50:14.539398       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 09:50:15.853200       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	
	
	==> kubelet <==
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.808940     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:28 ha-513251 kubelet[805]: E1227 09:53:28.909478     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.010659     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.111830     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.213061     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.314145     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.415234     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.516219     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.617290     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.718529     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.819658     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:29 ha-513251 kubelet[805]: E1227 09:53:29.921012     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.022292     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.123269     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.224473     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.325099     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.425810     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.526649     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.627928     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.728889     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.829336     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:30 ha-513251 kubelet[805]: E1227 09:53:30.930206     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:31 ha-513251 kubelet[805]: E1227 09:53:31.031227     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:31 ha-513251 kubelet[805]: E1227 09:53:31.132494     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 27 09:53:31 ha-513251 kubelet[805]: E1227 09:53:31.233686     805 reconstruct.go:188] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-513251\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-513251 -n ha-513251
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-513251 -n ha-513251: exit status 2 (329.82771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "ha-513251" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.24s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.42s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-287683 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-287683 --output=json --user=testUser: exit status 80 (2.416924216s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"37a61ec4-cad0-4b65-89d7-20717484206e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-287683 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"b1cfe606-e770-4b7c-a7e7-e8d3c2a235dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T09:54:28Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"4b3c51e1-9aa8-475a-bade-afd73fa8d6d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-287683 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.42s)
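Both this failure and the unpause failure below are reported through minikube's --output=json mode, which prints one CloudEvents-style JSON object per line; the GUEST_PAUSE / GUEST_UNPAUSE details ride in "io.k8s.sigs.minikube.error" events. A minimal, illustrative Go sketch for pulling those error events out of such a stream is shown below; the struct and field names simply mirror what appears in the output above, and this is not minikube's own decoder:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the JSON lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe the command's stdout in, e.g.:
	//   out/minikube-linux-arm64 pause -p json-output-287683 --output=json | ./decode
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// In this run: name=GUEST_PAUSE, exitcode=80, message=the runc list failure.
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}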

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-287683 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-287683 --output=json --user=testUser: exit status 80 (1.636140057s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e24705ba-15f4-48ff-90dd-917053a63dcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-287683 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e6896bc8-57d2-425b-94c2-5f90bf2fcaa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-27T09:54:30Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"9d243561-c6fc-4f0f-8a13-25071149f302","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-287683 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.64s)

                                                
                                    
x
+
TestPause/serial/Pause (9.07s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-708160 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-708160 --alsologtostderr -v=5: exit status 80 (2.75313872s)

                                                
                                                
-- stdout --
	* Pausing node pause-708160 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:07:29.581743  426949 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:07:29.581943  426949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:29.581971  426949 out.go:374] Setting ErrFile to fd 2...
	I1227 10:07:29.581991  426949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:29.582304  426949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:07:29.582627  426949 out.go:368] Setting JSON to false
	I1227 10:07:29.582681  426949 mustload.go:66] Loading cluster: pause-708160
	I1227 10:07:29.583182  426949 config.go:182] Loaded profile config "pause-708160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:29.583711  426949 cli_runner.go:164] Run: docker container inspect pause-708160 --format={{.State.Status}}
	I1227 10:07:29.601753  426949 host.go:66] Checking if "pause-708160" exists ...
	I1227 10:07:29.602064  426949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:29.715561  426949 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:47 OomKillDisable:true NGoroutines:68 SystemTime:2025-12-27 10:07:29.704236597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:29.716277  426949 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-708160 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:07:29.727189  426949 out.go:179] * Pausing node pause-708160 ... 
	I1227 10:07:29.732173  426949 host.go:66] Checking if "pause-708160" exists ...
	I1227 10:07:29.733223  426949 ssh_runner.go:195] Run: systemctl --version
	I1227 10:07:29.733269  426949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-708160
	I1227 10:07:29.762710  426949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa Username:docker}
	I1227 10:07:29.883093  426949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:29.897941  426949 pause.go:52] kubelet running: true
	I1227 10:07:29.898022  426949 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:30.227649  426949 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:30.227772  426949 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:30.308550  426949 cri.go:96] found id: "d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc"
	I1227 10:07:30.308583  426949 cri.go:96] found id: "10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9"
	I1227 10:07:30.308589  426949 cri.go:96] found id: "a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c"
	I1227 10:07:30.308593  426949 cri.go:96] found id: "2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f"
	I1227 10:07:30.308597  426949 cri.go:96] found id: "8accf0af299bbeb42eaba7c41dfc1e952332c74448d6dab63424c3da6b9345c1"
	I1227 10:07:30.308608  426949 cri.go:96] found id: "83b21918cd3f4a0784408c901996870c341dd01f50c25e2bfd73792516ccd48b"
	I1227 10:07:30.308611  426949 cri.go:96] found id: "c1a19553104260faaaa5aa331a7ff93ae7f15092486abe8d8ca3f4b56ad77590"
	I1227 10:07:30.308616  426949 cri.go:96] found id: "3079e2ab8d34d01f38d7ae4115c0bb716f4d774566cb6851ad4a865b5d8c3196"
	I1227 10:07:30.308620  426949 cri.go:96] found id: "c7965a3e48bb8bf18d75a8539e56bfc922406ccd352f4b28d84d0a546f4e6c36"
	I1227 10:07:30.308631  426949 cri.go:96] found id: "28bffddc84703204ed115c0dffebcc0bca180c4c588838da7fc20688bc1238ff"
	I1227 10:07:30.308637  426949 cri.go:96] found id: "4a50eddc661a4f82f4965c2ad250ef56be38dd8cb5ec0a70c61f8a632169fcb8"
	I1227 10:07:30.308640  426949 cri.go:96] found id: "d0b0eb38c19eab92cacf23e4694451181e8c28243a242a10d326b5b858be4470"
	I1227 10:07:30.308649  426949 cri.go:96] found id: "4ce51511884834f1f4f55745c7878be41ec49c18945051c3c04b7b118b00b869"
	I1227 10:07:30.308652  426949 cri.go:96] found id: "c85dfce27860abf2d02145e43c6d41b03c70e5c2b6e5bb8cf32868ab5d5a377f"
	I1227 10:07:30.308656  426949 cri.go:96] found id: ""
	I1227 10:07:30.308708  426949 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:30.321317  426949 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:30Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:07:30.578842  426949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:30.592560  426949 pause.go:52] kubelet running: false
	I1227 10:07:30.592649  426949 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:30.745763  426949 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:30.745852  426949 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:30.815079  426949 cri.go:96] found id: "d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc"
	I1227 10:07:30.815152  426949 cri.go:96] found id: "10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9"
	I1227 10:07:30.815180  426949 cri.go:96] found id: "a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c"
	I1227 10:07:30.815198  426949 cri.go:96] found id: "2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f"
	I1227 10:07:30.815233  426949 cri.go:96] found id: "8accf0af299bbeb42eaba7c41dfc1e952332c74448d6dab63424c3da6b9345c1"
	I1227 10:07:30.815257  426949 cri.go:96] found id: "83b21918cd3f4a0784408c901996870c341dd01f50c25e2bfd73792516ccd48b"
	I1227 10:07:30.815279  426949 cri.go:96] found id: "c1a19553104260faaaa5aa331a7ff93ae7f15092486abe8d8ca3f4b56ad77590"
	I1227 10:07:30.815314  426949 cri.go:96] found id: "3079e2ab8d34d01f38d7ae4115c0bb716f4d774566cb6851ad4a865b5d8c3196"
	I1227 10:07:30.815339  426949 cri.go:96] found id: "c7965a3e48bb8bf18d75a8539e56bfc922406ccd352f4b28d84d0a546f4e6c36"
	I1227 10:07:30.815363  426949 cri.go:96] found id: "28bffddc84703204ed115c0dffebcc0bca180c4c588838da7fc20688bc1238ff"
	I1227 10:07:30.815394  426949 cri.go:96] found id: "4a50eddc661a4f82f4965c2ad250ef56be38dd8cb5ec0a70c61f8a632169fcb8"
	I1227 10:07:30.815414  426949 cri.go:96] found id: "d0b0eb38c19eab92cacf23e4694451181e8c28243a242a10d326b5b858be4470"
	I1227 10:07:30.815432  426949 cri.go:96] found id: "4ce51511884834f1f4f55745c7878be41ec49c18945051c3c04b7b118b00b869"
	I1227 10:07:30.815452  426949 cri.go:96] found id: "c85dfce27860abf2d02145e43c6d41b03c70e5c2b6e5bb8cf32868ab5d5a377f"
	I1227 10:07:30.815483  426949 cri.go:96] found id: ""
	I1227 10:07:30.815571  426949 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:31.051431  426949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:31.065352  426949 pause.go:52] kubelet running: false
	I1227 10:07:31.065446  426949 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:31.209962  426949 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:31.210043  426949 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:31.277087  426949 cri.go:96] found id: "d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc"
	I1227 10:07:31.277159  426949 cri.go:96] found id: "10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9"
	I1227 10:07:31.277171  426949 cri.go:96] found id: "a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c"
	I1227 10:07:31.277203  426949 cri.go:96] found id: "2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f"
	I1227 10:07:31.277212  426949 cri.go:96] found id: "8accf0af299bbeb42eaba7c41dfc1e952332c74448d6dab63424c3da6b9345c1"
	I1227 10:07:31.277217  426949 cri.go:96] found id: "83b21918cd3f4a0784408c901996870c341dd01f50c25e2bfd73792516ccd48b"
	I1227 10:07:31.277220  426949 cri.go:96] found id: "c1a19553104260faaaa5aa331a7ff93ae7f15092486abe8d8ca3f4b56ad77590"
	I1227 10:07:31.277223  426949 cri.go:96] found id: "3079e2ab8d34d01f38d7ae4115c0bb716f4d774566cb6851ad4a865b5d8c3196"
	I1227 10:07:31.277233  426949 cri.go:96] found id: "c7965a3e48bb8bf18d75a8539e56bfc922406ccd352f4b28d84d0a546f4e6c36"
	I1227 10:07:31.277242  426949 cri.go:96] found id: "28bffddc84703204ed115c0dffebcc0bca180c4c588838da7fc20688bc1238ff"
	I1227 10:07:31.277246  426949 cri.go:96] found id: "4a50eddc661a4f82f4965c2ad250ef56be38dd8cb5ec0a70c61f8a632169fcb8"
	I1227 10:07:31.277249  426949 cri.go:96] found id: "d0b0eb38c19eab92cacf23e4694451181e8c28243a242a10d326b5b858be4470"
	I1227 10:07:31.277252  426949 cri.go:96] found id: "4ce51511884834f1f4f55745c7878be41ec49c18945051c3c04b7b118b00b869"
	I1227 10:07:31.277262  426949 cri.go:96] found id: "c85dfce27860abf2d02145e43c6d41b03c70e5c2b6e5bb8cf32868ab5d5a377f"
	I1227 10:07:31.277268  426949 cri.go:96] found id: ""
	I1227 10:07:31.277317  426949 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:31.853598  426949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:31.866820  426949 pause.go:52] kubelet running: false
	I1227 10:07:31.866909  426949 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:07:32.066008  426949 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:07:32.066083  426949 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:07:32.185402  426949 cri.go:96] found id: "d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc"
	I1227 10:07:32.185425  426949 cri.go:96] found id: "10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9"
	I1227 10:07:32.185429  426949 cri.go:96] found id: "a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c"
	I1227 10:07:32.185433  426949 cri.go:96] found id: "2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f"
	I1227 10:07:32.185437  426949 cri.go:96] found id: "8accf0af299bbeb42eaba7c41dfc1e952332c74448d6dab63424c3da6b9345c1"
	I1227 10:07:32.185441  426949 cri.go:96] found id: "83b21918cd3f4a0784408c901996870c341dd01f50c25e2bfd73792516ccd48b"
	I1227 10:07:32.185444  426949 cri.go:96] found id: "c1a19553104260faaaa5aa331a7ff93ae7f15092486abe8d8ca3f4b56ad77590"
	I1227 10:07:32.185456  426949 cri.go:96] found id: "3079e2ab8d34d01f38d7ae4115c0bb716f4d774566cb6851ad4a865b5d8c3196"
	I1227 10:07:32.185459  426949 cri.go:96] found id: "c7965a3e48bb8bf18d75a8539e56bfc922406ccd352f4b28d84d0a546f4e6c36"
	I1227 10:07:32.185466  426949 cri.go:96] found id: "28bffddc84703204ed115c0dffebcc0bca180c4c588838da7fc20688bc1238ff"
	I1227 10:07:32.185481  426949 cri.go:96] found id: "4a50eddc661a4f82f4965c2ad250ef56be38dd8cb5ec0a70c61f8a632169fcb8"
	I1227 10:07:32.185484  426949 cri.go:96] found id: "d0b0eb38c19eab92cacf23e4694451181e8c28243a242a10d326b5b858be4470"
	I1227 10:07:32.185488  426949 cri.go:96] found id: "4ce51511884834f1f4f55745c7878be41ec49c18945051c3c04b7b118b00b869"
	I1227 10:07:32.185491  426949 cri.go:96] found id: "c85dfce27860abf2d02145e43c6d41b03c70e5c2b6e5bb8cf32868ab5d5a377f"
	I1227 10:07:32.185494  426949 cri.go:96] found id: ""
	I1227 10:07:32.185543  426949 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:07:32.228362  426949 out.go:203] 
	W1227 10:07:32.231371  426949 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:07:32.231394  426949 out.go:285] * 
	* 
	W1227 10:07:32.234974  426949 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:07:32.241981  426949 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-708160 --alsologtostderr -v=5" : exit status 80
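The root cause reported above is the same one seen in the JSON-output tests: every pause attempt ends at `sudo runc list -f json` failing on the node with "open /run/runc: no such file or directory". The stderr log shows the pause path enumerating CRI containers with crictl and then calling runc over SSH. A small, hypothetical Go sketch that reproduces just that failing check from the host follows; it shells into the node container with `docker exec` as a shortcut (minikube itself goes through its ssh_runner, per the log), and the container name is the profile from this test:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const node = "pause-708160" // node container name from this test run

	// Same command the pause path runs on the node; in this run it exits 1 with
	// "open /run/runc: no such file or directory".
	out, err := exec.Command("docker", "exec", node, "sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}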
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-708160
helpers_test.go:244: (dbg) docker inspect pause-708160:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2",
	        "Created": "2025-12-27T10:06:16.721269401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 420903,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:06:17.670118228Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2/hostname",
	        "HostsPath": "/var/lib/docker/containers/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2/hosts",
	        "LogPath": "/var/lib/docker/containers/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2-json.log",
	        "Name": "/pause-708160",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-708160:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-708160",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2",
	                "LowerDir": "/var/lib/docker/overlay2/741c65814d27cdf582109317b5f4f9c1f0b778d90e7c8b86054ff6731dbfcbb8-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/741c65814d27cdf582109317b5f4f9c1f0b778d90e7c8b86054ff6731dbfcbb8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/741c65814d27cdf582109317b5f4f9c1f0b778d90e7c8b86054ff6731dbfcbb8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/741c65814d27cdf582109317b5f4f9c1f0b778d90e7c8b86054ff6731dbfcbb8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-708160",
	                "Source": "/var/lib/docker/volumes/pause-708160/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-708160",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-708160",
	                "name.minikube.sigs.k8s.io": "pause-708160",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "41a346eaa039a76c3460efd0f80e0c2211c2f3dcb05fe48ee973c47ac3eabc2a",
	            "SandboxKey": "/var/run/docker/netns/41a346eaa039",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33323"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33324"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33327"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33325"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33326"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-708160": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:ee:40:a2:ce:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84318959cf08365bced04e97fd9728b32623d53629375dc1d00f31bb0f1a3997",
	                    "EndpointID": "1dc3f52539b0f6b570a6db6d31625d62d8c78bd54ce65d4fd34efaa2a78f25f6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-708160",
	                        "c91bd29bc7bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
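The NetworkSettings.Ports block in the inspect output above is where the published SSH endpoint comes from: 22/tcp maps to 127.0.0.1:33323, matching the "new ssh client: &{IP:127.0.0.1 Port:33323 ...}" line in the pause stderr log. Below is a minimal illustrative Go sketch for reading that mapping; it is an assumption about how one could consume this output, not minikube's code, which queries the same field via a docker inspect template:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry keeps only the port bindings of interest from `docker inspect`.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-708160").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		fmt.Println("unexpected inspect output:", err)
		return
	}
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh port binding: %s:%s\n", b.HostIP, b.HostPort) // 127.0.0.1:33323 in this run
	}
}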
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-708160 -n pause-708160
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-708160 -n pause-708160: exit status 2 (757.986464ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-708160 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-708160 logs -n 25: (1.980315318s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-823603                                                                                         │ multinode-823603            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ start   │ -p multinode-823603-m02 --driver=docker  --container-runtime=crio                                                │ multinode-823603-m02        │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ start   │ -p multinode-823603-m03 --driver=docker  --container-runtime=crio                                                │ multinode-823603-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ node    │ add -p multinode-823603                                                                                          │ multinode-823603            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ delete  │ -p multinode-823603-m03                                                                                          │ multinode-823603-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p multinode-823603                                                                                              │ multinode-823603            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p scheduled-stop-425603 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ stop    │ -p scheduled-stop-425603 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --cancel-scheduled                                                                      │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ delete  │ -p scheduled-stop-425603                                                                                         │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p insufficient-storage-217644 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-217644 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ delete  │ -p insufficient-storage-217644                                                                                   │ insufficient-storage-217644 │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ start   │ -p pause-708160 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-708160                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p missing-upgrade-651060 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-651060      │ jenkins │ v1.35.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p pause-708160 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-708160                │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p missing-upgrade-651060 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-651060      │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ pause   │ -p pause-708160 --alsologtostderr -v=5                                                                           │ pause-708160                │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:07:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:07:10.498145  425825 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:07:10.498368  425825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:10.498400  425825 out.go:374] Setting ErrFile to fd 2...
	I1227 10:07:10.498426  425825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:10.498783  425825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:07:10.499251  425825 out.go:368] Setting JSON to false
	I1227 10:07:10.500301  425825 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6584,"bootTime":1766823447,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:07:10.500410  425825 start.go:143] virtualization:  
	I1227 10:07:10.512208  425825 out.go:179] * [missing-upgrade-651060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:07:10.515342  425825 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:07:10.515425  425825 notify.go:221] Checking for updates...
	I1227 10:07:10.519059  425825 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:07:10.522424  425825 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:07:10.525352  425825 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:07:10.528338  425825 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:07:10.531377  425825 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:07:10.535091  425825 config.go:182] Loaded profile config "missing-upgrade-651060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 10:07:10.538718  425825 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 10:07:10.541771  425825 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:07:10.636155  425825 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:07:10.636279  425825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:10.750680  425825 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:07:10.736903935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:10.750806  425825 docker.go:319] overlay module found
	I1227 10:07:10.753970  425825 out.go:179] * Using the docker driver based on existing profile
	I1227 10:07:10.756724  425825 start.go:309] selected driver: docker
	I1227 10:07:10.756742  425825 start.go:928] validating driver "docker" against &{Name:missing-upgrade-651060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-651060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:10.756831  425825 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:07:10.757518  425825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:10.843166  425825 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:07:10.833310021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:10.843483  425825 cni.go:84] Creating CNI manager for ""
	I1227 10:07:10.843551  425825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:10.843598  425825 start.go:353] cluster config:
	{Name:missing-upgrade-651060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-651060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:10.846762  425825 out.go:179] * Starting "missing-upgrade-651060" primary control-plane node in "missing-upgrade-651060" cluster
	I1227 10:07:10.849660  425825 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:07:10.852625  425825 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:07:10.855481  425825 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 10:07:10.855522  425825 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:07:10.855533  425825 cache.go:65] Caching tarball of preloaded images
	I1227 10:07:10.855617  425825 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:07:10.855626  425825 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1227 10:07:10.855742  425825 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/missing-upgrade-651060/config.json ...
	I1227 10:07:10.855946  425825 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1227 10:07:10.883482  425825 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1227 10:07:10.883501  425825 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1227 10:07:10.883516  425825 cache.go:243] Successfully downloaded all kic artifacts
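Note: the image-presence check logged above (image.go) can be reproduced by hand with the docker CLI; a minimal sketch of the same "inspect, else pull" pattern, assuming the same host docker daemon the job uses:

    IMG="gcr.io/k8s-minikube/kicbase:v0.0.46"
    # Present in the local daemon? Then the pull is skipped, exactly as the log reports.
    if docker image inspect "$IMG" --format '{{.Id}}' >/dev/null 2>&1; then
      echo "kicbase found locally, skipping pull"
    else
      docker pull "$IMG"
    fi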
	I1227 10:07:10.883545  425825 start.go:360] acquireMachinesLock for missing-upgrade-651060: {Name:mkf297c1d32a94879e675043ae17ea8cf87ee97f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:07:10.883601  425825 start.go:364] duration metric: took 35.257µs to acquireMachinesLock for "missing-upgrade-651060"
	I1227 10:07:10.883620  425825 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:07:10.883626  425825 fix.go:54] fixHost starting: 
	I1227 10:07:10.883895  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:10.907371  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:10.907436  425825 fix.go:112] recreateIfNeeded on missing-upgrade-651060: state= err=unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.907457  425825 fix.go:117] machineExists: false. err=machine does not exist
	I1227 10:07:10.910797  425825 out.go:179] * docker "missing-upgrade-651060" container is missing, will recreate.
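Note: the "container is missing" verdict follows from the state probe shown above; a sketch of the same check, where a non-zero exit with "No such container" on stderr is treated as "machine does not exist":

    # Mirrors the docker container inspect call logged above.
    if ! docker container inspect missing-upgrade-651060 --format '{{.State.Status}}' 2>/dev/null; then
      echo "container not found; minikube will recreate it"
    fi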
	I1227 10:07:09.095579  425058 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:07:09.095603  425058 machine.go:97] duration metric: took 6.789244034s to provisionDockerMachine
	I1227 10:07:09.095616  425058 start.go:293] postStartSetup for "pause-708160" (driver="docker")
	I1227 10:07:09.095628  425058 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:07:09.095689  425058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:07:09.095740  425058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-708160
	I1227 10:07:09.130342  425058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa Username:docker}
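Note: the SSH endpoint (127.0.0.1:33323) is discovered from the container's published 22/tcp port; a sketch of doing the same by hand, reusing the inspect format, key path and username from the log line above:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-708160)
    ssh -o StrictHostKeyChecking=no -p "$PORT" \
        -i /home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa \
        docker@127.0.0.1 'cat /etc/os-release'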
	I1227 10:07:09.242411  425058 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:07:09.247366  425058 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:07:09.247395  425058 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:07:09.247406  425058 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:07:09.247461  425058 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:07:09.247538  425058 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:07:09.247644  425058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:07:09.257235  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:07:09.282521  425058 start.go:296] duration metric: took 186.889557ms for postStartSetup
	I1227 10:07:09.282695  425058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:07:09.282848  425058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-708160
	I1227 10:07:09.308981  425058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa Username:docker}
	I1227 10:07:09.411593  425058 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:07:09.418976  425058 fix.go:56] duration metric: took 7.15992937s for fixHost
	I1227 10:07:09.419007  425058 start.go:83] releasing machines lock for "pause-708160", held for 7.159982729s
	I1227 10:07:09.419093  425058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-708160
	I1227 10:07:09.461090  425058 ssh_runner.go:195] Run: cat /version.json
	I1227 10:07:09.461162  425058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-708160
	I1227 10:07:09.461507  425058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:07:09.461568  425058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-708160
	I1227 10:07:09.514376  425058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa Username:docker}
	I1227 10:07:09.526247  425058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa Username:docker}
	I1227 10:07:09.676021  425058 ssh_runner.go:195] Run: systemctl --version
	I1227 10:07:09.797688  425058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:07:09.891197  425058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:07:09.897735  425058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:07:09.897805  425058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:07:09.915544  425058 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:07:09.915565  425058 start.go:496] detecting cgroup driver to use...
	I1227 10:07:09.915605  425058 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:07:09.915653  425058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:07:09.941582  425058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:07:09.962306  425058 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:07:09.962388  425058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:07:09.983015  425058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:07:10.003443  425058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:07:10.181196  425058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:07:10.364987  425058 docker.go:234] disabling docker service ...
	I1227 10:07:10.365124  425058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:07:10.382786  425058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:07:10.397564  425058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:07:10.627640  425058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:07:10.829322  425058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:07:10.849134  425058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:07:10.865944  425058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:07:10.866076  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.875900  425058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:07:10.876099  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.889536  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.899091  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.908867  425058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:07:10.921037  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.932395  425058 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.942887  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
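Note: the sed edits above converge on a handful of keys in /etc/crio/crio.conf.d/02-crio.conf; a sketch of what to expect there afterwards (TOML section headers and unrelated keys omitted):

    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf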
	I1227 10:07:10.954829  425058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:07:10.963446  425058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:07:10.972739  425058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:11.116744  425058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:07:11.344017  425058 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:07:11.344097  425058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:07:11.348466  425058 start.go:574] Will wait 60s for crictl version
	I1227 10:07:11.348531  425058 ssh_runner.go:195] Run: which crictl
	I1227 10:07:11.352495  425058 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:07:11.377896  425058 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:07:11.378001  425058 ssh_runner.go:195] Run: crio --version
	I1227 10:07:11.408770  425058 ssh_runner.go:195] Run: crio --version
	I1227 10:07:11.440997  425058 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:07:11.443893  425058 cli_runner.go:164] Run: docker network inspect pause-708160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:11.460461  425058 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:07:11.464759  425058 kubeadm.go:884] updating cluster {Name:pause-708160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-708160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:07:11.464910  425058 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:11.464970  425058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:11.502789  425058 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:11.502814  425058 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:07:11.502870  425058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:11.528757  425058 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:11.528781  425058 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:07:11.528789  425058 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:07:11.528896  425058 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-708160 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-708160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
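Note: the kubelet unit and flags above are written out as systemd files a few lines below (kubelet.service plus the 10-kubeadm.conf drop-in); a quick way to confirm what systemd actually loaded on the node:

    systemctl cat kubelet                                       # base unit plus drop-ins
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the ExecStart override shown above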
	I1227 10:07:11.528981  425058 ssh_runner.go:195] Run: crio config
	I1227 10:07:11.600646  425058 cni.go:84] Creating CNI manager for ""
	I1227 10:07:11.600676  425058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:11.600718  425058 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:07:11.600758  425058 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-708160 NodeName:pause-708160 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:07:11.600896  425058 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-708160"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:07:11.600972  425058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:07:11.610116  425058 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:07:11.610183  425058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:07:11.617747  425058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1227 10:07:11.630709  425058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:07:11.646516  425058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
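Note: the generated kubeadm config is staged at /var/tmp/minikube/kubeadm.yaml.new; assuming a kubeadm release recent enough to ship the validate subcommand, it could be sanity-checked before use:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # heavier option that exercises init without committing changes:
    # sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run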
	I1227 10:07:11.683670  425058 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:07:11.690542  425058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:11.936675  425058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:11.966074  425058 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160 for IP: 192.168.76.2
	I1227 10:07:11.966101  425058 certs.go:195] generating shared ca certs ...
	I1227 10:07:11.966118  425058 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:11.966338  425058 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:07:11.966408  425058 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:07:11.966423  425058 certs.go:257] generating profile certs ...
	I1227 10:07:11.966571  425058 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.key
	I1227 10:07:11.966695  425058 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/apiserver.key.01d2a6ce
	I1227 10:07:11.966781  425058 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/proxy-client.key
	I1227 10:07:11.966971  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:07:11.967028  425058 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:07:11.967043  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:07:11.967104  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:07:11.967157  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:07:11.967217  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:07:11.967291  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:07:11.968091  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:07:12.007082  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:07:12.057326  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:07:12.084394  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:07:12.116508  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 10:07:12.159395  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:07:12.195479  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:07:12.226967  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:07:12.269075  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:07:12.301333  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:07:12.333834  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:07:12.362748  425058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:07:12.386500  425058 ssh_runner.go:195] Run: openssl version
	I1227 10:07:12.395687  425058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:07:12.404952  425058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:07:12.417773  425058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:07:12.422596  425058 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:07:12.422708  425058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:07:12.470003  425058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:07:12.481108  425058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:07:12.489442  425058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:07:12.499155  425058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:07:12.503482  425058 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:07:12.503602  425058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:07:12.547868  425058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:12.557909  425058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:12.566021  425058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:07:12.574628  425058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:12.580929  425058 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:12.581044  425058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:12.628172  425058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
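Note: the 51391683.0 / 3ec20f2e.0 / b5213941.0 names tested above are OpenSSL subject hashes; a sketch of how such a trust link is built for one of the certificates from this log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    ls -l "/etc/ssl/certs/${HASH}.0"   # same kind of symlink the test -L calls above verify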
	I1227 10:07:12.636569  425058 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:07:12.642122  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:07:12.688342  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:07:12.737683  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:07:12.815459  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:07:12.890892  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:07:12.948859  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
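Note: -checkend 86400 asks whether a certificate stays valid for another 24 hours (86400 seconds); exit status 0 means it does, 1 means it expires within that window. For example, against one of the certs checked above:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"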
	I1227 10:07:13.005100  425058 kubeadm.go:401] StartCluster: {Name:pause-708160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-708160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:13.005240  425058 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:07:13.005313  425058 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:07:13.049717  425058 cri.go:96] found id: "d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc"
	I1227 10:07:13.049737  425058 cri.go:96] found id: "10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9"
	I1227 10:07:13.049741  425058 cri.go:96] found id: "a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c"
	I1227 10:07:13.049745  425058 cri.go:96] found id: "2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f"
	I1227 10:07:13.049748  425058 cri.go:96] found id: "8accf0af299bbeb42eaba7c41dfc1e952332c74448d6dab63424c3da6b9345c1"
	I1227 10:07:13.049752  425058 cri.go:96] found id: "83b21918cd3f4a0784408c901996870c341dd01f50c25e2bfd73792516ccd48b"
	I1227 10:07:13.049755  425058 cri.go:96] found id: "c1a19553104260faaaa5aa331a7ff93ae7f15092486abe8d8ca3f4b56ad77590"
	I1227 10:07:13.049758  425058 cri.go:96] found id: "3079e2ab8d34d01f38d7ae4115c0bb716f4d774566cb6851ad4a865b5d8c3196"
	I1227 10:07:13.049761  425058 cri.go:96] found id: "c7965a3e48bb8bf18d75a8539e56bfc922406ccd352f4b28d84d0a546f4e6c36"
	I1227 10:07:13.049769  425058 cri.go:96] found id: "28bffddc84703204ed115c0dffebcc0bca180c4c588838da7fc20688bc1238ff"
	I1227 10:07:13.049772  425058 cri.go:96] found id: "4a50eddc661a4f82f4965c2ad250ef56be38dd8cb5ec0a70c61f8a632169fcb8"
	I1227 10:07:13.049775  425058 cri.go:96] found id: "d0b0eb38c19eab92cacf23e4694451181e8c28243a242a10d326b5b858be4470"
	I1227 10:07:13.049788  425058 cri.go:96] found id: "4ce51511884834f1f4f55745c7878be41ec49c18945051c3c04b7b118b00b869"
	I1227 10:07:13.049791  425058 cri.go:96] found id: "c85dfce27860abf2d02145e43c6d41b03c70e5c2b6e5bb8cf32868ab5d5a377f"
	I1227 10:07:13.049794  425058 cri.go:96] found id: ""
	I1227 10:07:13.049852  425058 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:07:13.074393  425058 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:13Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:07:13.074478  425058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:07:13.087565  425058 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:07:13.087582  425058 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:07:13.087636  425058 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:07:13.100369  425058 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:07:13.101185  425058 kubeconfig.go:125] found "pause-708160" server: "https://192.168.76.2:8443"
	I1227 10:07:13.102164  425058 kapi.go:59] client config for pause-708160: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 10:07:13.102873  425058 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 10:07:13.103004  425058 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 10:07:13.103034  425058 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 10:07:13.103055  425058 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 10:07:13.103086  425058 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 10:07:13.103115  425058 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 10:07:13.103474  425058 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:07:13.117269  425058 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 10:07:13.117301  425058 kubeadm.go:602] duration metric: took 29.712459ms to restartPrimaryControlPlane
	I1227 10:07:13.117310  425058 kubeadm.go:403] duration metric: took 112.223523ms to StartCluster
	I1227 10:07:13.117326  425058 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:13.117395  425058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:07:13.118294  425058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:13.118516  425058 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:07:13.118985  425058 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:07:13.119203  425058 config.go:182] Loaded profile config "pause-708160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:13.122286  425058 out.go:179] * Enabled addons: 
	I1227 10:07:13.122406  425058 out.go:179] * Verifying Kubernetes components...
	I1227 10:07:10.913675  425825 delete.go:124] DEMOLISHING missing-upgrade-651060 ...
	I1227 10:07:10.913779  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:10.929561  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	W1227 10:07:10.929620  425825 stop.go:83] unable to get state: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.929638  425825 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.930088  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:10.949733  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:10.949808  425825 delete.go:82] Unable to get host status for missing-upgrade-651060, assuming it has already been deleted: state: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.949876  425825 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-651060
	W1227 10:07:10.974774  425825 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-651060 returned with exit code 1
	I1227 10:07:10.974805  425825 kic.go:371] could not find the container missing-upgrade-651060 to remove it. will try anyways
	I1227 10:07:10.974856  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:10.993735  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	W1227 10:07:10.993807  425825 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.993871  425825 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-651060 /bin/bash -c "sudo init 0"
	W1227 10:07:11.037628  425825 cli_runner.go:211] docker exec --privileged -t missing-upgrade-651060 /bin/bash -c "sudo init 0" returned with exit code 1
	I1227 10:07:11.037677  425825 oci.go:659] error shutdown missing-upgrade-651060: docker exec --privileged -t missing-upgrade-651060 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:12.037831  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:12.059392  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:12.059450  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:12.059468  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:12.059515  425825 retry.go:84] will retry after 700ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:12.759410  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:12.790089  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:12.790146  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:12.790156  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:13.555154  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:13.591247  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:13.591352  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:13.591372  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:15.019623  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:15.051920  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:15.052006  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:15.052024  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:13.125252  425058 addons.go:530] duration metric: took 6.26006ms for enable addons: enabled=[]
	I1227 10:07:13.125377  425058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:13.354013  425058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:13.371680  425058 node_ready.go:35] waiting up to 6m0s for node "pause-708160" to be "Ready" ...
	I1227 10:07:15.369575  425058 node_ready.go:49] node "pause-708160" is "Ready"
	I1227 10:07:15.369602  425058 node_ready.go:38] duration metric: took 1.997845068s for node "pause-708160" to be "Ready" ...
	I1227 10:07:15.369615  425058 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:07:15.369673  425058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:07:15.386014  425058 api_server.go:72] duration metric: took 2.267467461s to wait for apiserver process to appear ...
	I1227 10:07:15.386037  425058 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:07:15.386056  425058 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:07:15.426393  425058 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 10:07:15.426467  425058 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 10:07:15.887117  425058 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:07:15.896358  425058 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 10:07:15.896434  425058 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 10:07:16.386165  425058 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:07:16.395160  425058 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
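Note: the same probe can be run by hand; -k skips verification because the endpoint presents the minikube CA, and ?verbose returns the per-check [+]/[-] breakdown seen in the 500 response earlier (the 403 went away once the RBAC bootstrap roles finished):

    curl -sk https://192.168.76.2:8443/healthz              # "ok" once every check passes
    curl -sk "https://192.168.76.2:8443/healthz?verbose"    # individual [+]/[-] check lines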
	I1227 10:07:16.396442  425058 api_server.go:141] control plane version: v1.35.0
	I1227 10:07:16.396477  425058 api_server.go:131] duration metric: took 1.010433256s to wait for apiserver health ...
	I1227 10:07:16.396487  425058 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:07:16.400255  425058 system_pods.go:59] 7 kube-system pods found
	I1227 10:07:16.400310  425058 system_pods.go:61] "coredns-7d764666f9-4m4gm" [70dcfb3e-0b6d-48dd-a817-849c6ffbda06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:07:16.400325  425058 system_pods.go:61] "etcd-pause-708160" [c207facb-d7d8-44f5-9551-b50f1312d45f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:07:16.400338  425058 system_pods.go:61] "kindnet-h9hk6" [21ce871d-d4c7-4ac3-8459-4154f198693b] Running
	I1227 10:07:16.400345  425058 system_pods.go:61] "kube-apiserver-pause-708160" [057197e0-abe1-41b2-a36d-1a48cb3c7f82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:07:16.400366  425058 system_pods.go:61] "kube-controller-manager-pause-708160" [b62c56f5-dba8-4f4a-a42a-c4a4e24b0683] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:07:16.400372  425058 system_pods.go:61] "kube-proxy-2mnpk" [9865d55d-22e8-4301-9b7a-497ee437a59a] Running
	I1227 10:07:16.400380  425058 system_pods.go:61] "kube-scheduler-pause-708160" [6d4826fa-7096-43ea-907d-57c97b93d482] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:07:16.400396  425058 system_pods.go:74] duration metric: took 3.899591ms to wait for pod list to return data ...
	I1227 10:07:16.400411  425058 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:07:16.403223  425058 default_sa.go:45] found service account: "default"
	I1227 10:07:16.403252  425058 default_sa.go:55] duration metric: took 2.834237ms for default service account to be created ...
	I1227 10:07:16.403264  425058 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:07:16.406524  425058 system_pods.go:86] 7 kube-system pods found
	I1227 10:07:16.406562  425058 system_pods.go:89] "coredns-7d764666f9-4m4gm" [70dcfb3e-0b6d-48dd-a817-849c6ffbda06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:07:16.406584  425058 system_pods.go:89] "etcd-pause-708160" [c207facb-d7d8-44f5-9551-b50f1312d45f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:07:16.406591  425058 system_pods.go:89] "kindnet-h9hk6" [21ce871d-d4c7-4ac3-8459-4154f198693b] Running
	I1227 10:07:16.406602  425058 system_pods.go:89] "kube-apiserver-pause-708160" [057197e0-abe1-41b2-a36d-1a48cb3c7f82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:07:16.406613  425058 system_pods.go:89] "kube-controller-manager-pause-708160" [b62c56f5-dba8-4f4a-a42a-c4a4e24b0683] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:07:16.406618  425058 system_pods.go:89] "kube-proxy-2mnpk" [9865d55d-22e8-4301-9b7a-497ee437a59a] Running
	I1227 10:07:16.406625  425058 system_pods.go:89] "kube-scheduler-pause-708160" [6d4826fa-7096-43ea-907d-57c97b93d482] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:07:16.406639  425058 system_pods.go:126] duration metric: took 3.36943ms to wait for k8s-apps to be running ...
	I1227 10:07:16.406647  425058 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:07:16.406707  425058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:16.420435  425058 system_svc.go:56] duration metric: took 13.77742ms WaitForService to wait for kubelet
	I1227 10:07:16.420469  425058 kubeadm.go:587] duration metric: took 3.301926955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:07:16.420488  425058 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:07:16.424468  425058 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:07:16.424502  425058 node_conditions.go:123] node cpu capacity is 2
	I1227 10:07:16.424516  425058 node_conditions.go:105] duration metric: took 4.022497ms to run NodePressure ...
	I1227 10:07:16.424530  425058 start.go:242] waiting for startup goroutines ...
	I1227 10:07:16.424537  425058 start.go:247] waiting for cluster config update ...
	I1227 10:07:16.424546  425058 start.go:256] writing updated cluster config ...
	I1227 10:07:16.424857  425058 ssh_runner.go:195] Run: rm -f paused
	I1227 10:07:16.428622  425058 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:07:16.429289  425058 kapi.go:59] client config for pause-708160: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 10:07:16.432650  425058 pod_ready.go:83] waiting for pod "coredns-7d764666f9-4m4gm" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:16.106830  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:16.128863  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:16.130432  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:16.130459  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:19.376122  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:19.391330  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:19.391403  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:19.391418  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:19.391452  425825 retry.go:84] will retry after 2.3s: couldn't verify container is exited. %v: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	W1227 10:07:18.441395  425058 pod_ready.go:104] pod "coredns-7d764666f9-4m4gm" is not "Ready", error: <nil>
	W1227 10:07:20.939578  425058 pod_ready.go:104] pod "coredns-7d764666f9-4m4gm" is not "Ready", error: <nil>
	I1227 10:07:21.693534  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:21.709646  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:21.709718  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:21.709731  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:25.390461  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:25.408840  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:25.408938  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:25.408962  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:25.409019  425825 oci.go:88] couldn't shut down missing-upgrade-651060 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	 
	I1227 10:07:25.409113  425825 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-651060
	I1227 10:07:25.425026  425825 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-651060
	W1227 10:07:25.448108  425825 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-651060 returned with exit code 1
	I1227 10:07:25.448208  425825 cli_runner.go:164] Run: docker network inspect missing-upgrade-651060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:25.471378  425825 cli_runner.go:164] Run: docker network rm missing-upgrade-651060
	I1227 10:07:25.573252  425825 fix.go:124] Sleeping 1 second for extra luck!
	I1227 10:07:26.573413  425825 start.go:125] createHost starting for "" (driver="docker")
	W1227 10:07:22.939625  425058 pod_ready.go:104] pod "coredns-7d764666f9-4m4gm" is not "Ready", error: <nil>
	I1227 10:07:23.938048  425058 pod_ready.go:94] pod "coredns-7d764666f9-4m4gm" is "Ready"
	I1227 10:07:23.938081  425058 pod_ready.go:86] duration metric: took 7.50540505s for pod "coredns-7d764666f9-4m4gm" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:23.941257  425058 pod_ready.go:83] waiting for pod "etcd-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:23.946029  425058 pod_ready.go:94] pod "etcd-pause-708160" is "Ready"
	I1227 10:07:23.946055  425058 pod_ready.go:86] duration metric: took 4.770088ms for pod "etcd-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:23.948314  425058 pod_ready.go:83] waiting for pod "kube-apiserver-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:07:25.953647  425058 pod_ready.go:104] pod "kube-apiserver-pause-708160" is not "Ready", error: <nil>
	I1227 10:07:26.955161  425058 pod_ready.go:94] pod "kube-apiserver-pause-708160" is "Ready"
	I1227 10:07:26.955186  425058 pod_ready.go:86] duration metric: took 3.006847816s for pod "kube-apiserver-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:26.958456  425058 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:28.963685  425058 pod_ready.go:94] pod "kube-controller-manager-pause-708160" is "Ready"
	I1227 10:07:28.963712  425058 pod_ready.go:86] duration metric: took 2.005232944s for pod "kube-controller-manager-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:28.965931  425058 pod_ready.go:83] waiting for pod "kube-proxy-2mnpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:28.970080  425058 pod_ready.go:94] pod "kube-proxy-2mnpk" is "Ready"
	I1227 10:07:28.970109  425058 pod_ready.go:86] duration metric: took 4.147421ms for pod "kube-proxy-2mnpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:28.972244  425058 pod_ready.go:83] waiting for pod "kube-scheduler-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:29.337939  425058 pod_ready.go:94] pod "kube-scheduler-pause-708160" is "Ready"
	I1227 10:07:29.337963  425058 pod_ready.go:86] duration metric: took 365.691465ms for pod "kube-scheduler-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:29.337976  425058 pod_ready.go:40] duration metric: took 12.909319139s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:07:29.418526  425058 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:07:29.434430  425058 out.go:203] 
	W1227 10:07:29.444778  425058 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:07:29.453083  425058 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:07:29.461333  425058 out.go:179] * Done! kubectl is now configured to use "pause-708160" cluster and "default" namespace by default
	I1227 10:07:26.576510  425825 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:07:26.576636  425825 start.go:159] libmachine.API.Create for "missing-upgrade-651060" (driver="docker")
	I1227 10:07:26.576676  425825 client.go:173] LocalClient.Create starting
	I1227 10:07:26.576750  425825 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:07:26.576798  425825 main.go:144] libmachine: Decoding PEM data...
	I1227 10:07:26.576819  425825 main.go:144] libmachine: Parsing certificate...
	I1227 10:07:26.576875  425825 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:07:26.576898  425825 main.go:144] libmachine: Decoding PEM data...
	I1227 10:07:26.576915  425825 main.go:144] libmachine: Parsing certificate...
	I1227 10:07:26.577175  425825 cli_runner.go:164] Run: docker network inspect missing-upgrade-651060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:07:26.592785  425825 cli_runner.go:211] docker network inspect missing-upgrade-651060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:07:26.592878  425825 network_create.go:284] running [docker network inspect missing-upgrade-651060] to gather additional debugging logs...
	I1227 10:07:26.592901  425825 cli_runner.go:164] Run: docker network inspect missing-upgrade-651060
	W1227 10:07:26.608047  425825 cli_runner.go:211] docker network inspect missing-upgrade-651060 returned with exit code 1
	I1227 10:07:26.608079  425825 network_create.go:287] error running [docker network inspect missing-upgrade-651060]: docker network inspect missing-upgrade-651060: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-651060 not found
	I1227 10:07:26.608091  425825 network_create.go:289] output of [docker network inspect missing-upgrade-651060]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-651060 not found
	
	** /stderr **
	I1227 10:07:26.608210  425825 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:26.624025  425825 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:07:26.624513  425825 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:07:26.624819  425825 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:07:26.625211  425825 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-84318959cf08 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:f4:2d:46:56:6a} reservation:<nil>}
	I1227 10:07:26.625690  425825 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bfed90}
	I1227 10:07:26.625714  425825 network_create.go:124] attempt to create docker network missing-upgrade-651060 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:07:26.625777  425825 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-651060 missing-upgrade-651060
	I1227 10:07:26.691032  425825 network_create.go:108] docker network missing-upgrade-651060 192.168.85.0/24 created
	I1227 10:07:26.691079  425825 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-651060" container
	I1227 10:07:26.691193  425825 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:07:26.706861  425825 cli_runner.go:164] Run: docker volume create missing-upgrade-651060 --label name.minikube.sigs.k8s.io=missing-upgrade-651060 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:07:26.722348  425825 oci.go:103] Successfully created a docker volume missing-upgrade-651060
	I1227 10:07:26.722452  425825 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-651060-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-651060 --entrypoint /usr/bin/test -v missing-upgrade-651060:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1227 10:07:27.134452  425825 oci.go:107] Successfully prepared a docker volume missing-upgrade-651060
	I1227 10:07:27.134515  425825 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 10:07:27.134526  425825 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:07:27.134592  425825 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-651060:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
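Editor's note on the missing-upgrade-651060 trace above (10:07:16 through 10:07:25): minikube repeatedly runs `docker container inspect --format {{.State.Status}}`, backs off, and eventually gives up and force-removes the container. Purely as an illustration of that verify-shutdown pattern, and not minikube's actual oci/retry code, a minimal Go sketch might look like the following; the container name and rough 2-second back-off are taken from the log, everything else is hypothetical.

```go
// Illustrative sketch only: poll `docker container inspect` until Docker
// reports the container gone, or give up after a deadline.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitGone returns nil once Docker reports "No such container" for name,
// or an error if the deadline passes first.
func waitGone(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil && strings.Contains(string(out), "No such container") {
			return nil // container is already gone; nothing to wait for
		}
		fmt.Printf("container %s still reports %q, retrying...\n",
			name, strings.TrimSpace(string(out)))
		time.Sleep(2 * time.Second) // rough back-off, as in the trace
	}
	return fmt.Errorf("timed out waiting for container %s to exit", name)
}

func main() {
	if err := waitGone("missing-upgrade-651060", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```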
	
	
	==> CRI-O <==
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.865741298Z" level=info msg="Started container" PID=2250 containerID=2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f description=kube-system/kube-scheduler-pause-708160/kube-scheduler id=5a583942-548c-4b8b-b7a3-54743f4d4973 name=/runtime.v1.RuntimeService/StartContainer sandboxID=50c3315b3c6ed8363a3bd429dfc2c87a9626c1dd558b8b152357d91cbac700f5
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.907544255Z" level=info msg="Created container 10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9: kube-system/etcd-pause-708160/etcd" id=25707883-d0ba-4be1-80e6-32647c9a70a7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.912722141Z" level=info msg="Starting container: 10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9" id=86ca3d00-1a8c-41bf-b763-616728f7d15a name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.916631004Z" level=info msg="Started container" PID=2269 containerID=10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9 description=kube-system/etcd-pause-708160/etcd id=86ca3d00-1a8c-41bf-b763-616728f7d15a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d9c35b9f910442a460fcf9bc21007ca29dfe8f8e687792952fd2aa1e13b1415c
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.920268349Z" level=info msg="Created container d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc: kube-system/kube-apiserver-pause-708160/kube-apiserver" id=1039f901-0295-40ac-b614-7b46efbd7372 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.921207327Z" level=info msg="Starting container: d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc" id=a1988509-a6c0-46ba-84aa-657d4683efac name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.955355603Z" level=info msg="Started container" PID=2290 containerID=d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc description=kube-system/kube-apiserver-pause-708160/kube-apiserver id=a1988509-a6c0-46ba-84aa-657d4683efac name=/runtime.v1.RuntimeService/StartContainer sandboxID=854a133e8f3029a9e922318723ade9273d6b3951243f56c1d9aa10ec4770af1a
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.987288461Z" level=info msg="Created container a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c: kube-system/kube-controller-manager-pause-708160/kube-controller-manager" id=463d4502-4b4f-452e-b0dc-1870f06219fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.987931075Z" level=info msg="Starting container: a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c" id=eea01752-38f1-43ed-961c-c70fc502b963 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.990331963Z" level=info msg="Started container" PID=2278 containerID=a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c description=kube-system/kube-controller-manager-pause-708160/kube-controller-manager id=eea01752-38f1-43ed-961c-c70fc502b963 name=/runtime.v1.RuntimeService/StartContainer sandboxID=835d71bb40a29fa17d6528d2aeb01aa55148bafb27ec5b8dcd092b1359bfc4b8
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.091706011Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.095611978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.095848977Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.095886958Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.099223485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.099260572Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.099286271Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.102723254Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.102760013Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.102784423Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.106105975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.106145377Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.106169344Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.110189412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.110229101Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d2ec260d7f42c       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     22 seconds ago       Running             kube-apiserver            1                   854a133e8f302       kube-apiserver-pause-708160            kube-system
	10a5ff43b9ef4       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     22 seconds ago       Running             etcd                      1                   d9c35b9f91044       etcd-pause-708160                      kube-system
	a5fb483d50cc0       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     22 seconds ago       Running             kube-controller-manager   1                   835d71bb40a29       kube-controller-manager-pause-708160   kube-system
	2c64d8e381df6       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     22 seconds ago       Running             kube-scheduler            1                   50c3315b3c6ed       kube-scheduler-pause-708160            kube-system
	8accf0af299bb       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     22 seconds ago       Running             coredns                   1                   8d8e403ff3d6e       coredns-7d764666f9-4m4gm               kube-system
	83b21918cd3f4       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     22 seconds ago       Running             kube-proxy                1                   df253bbf36ccc       kube-proxy-2mnpk                       kube-system
	c1a1955310426       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     22 seconds ago       Running             kindnet-cni               1                   5f987979c72c2       kindnet-h9hk6                          kube-system
	3079e2ab8d34d       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     35 seconds ago       Exited              coredns                   0                   8d8e403ff3d6e       coredns-7d764666f9-4m4gm               kube-system
	c7965a3e48bb8       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   46 seconds ago       Exited              kindnet-cni               0                   5f987979c72c2       kindnet-h9hk6                          kube-system
	28bffddc84703       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     49 seconds ago       Exited              kube-proxy                0                   df253bbf36ccc       kube-proxy-2mnpk                       kube-system
	4a50eddc661a4       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            0                   50c3315b3c6ed       kube-scheduler-pause-708160            kube-system
	d0b0eb38c19ea       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     About a minute ago   Exited              kube-apiserver            0                   854a133e8f302       kube-apiserver-pause-708160            kube-system
	4ce5151188483       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   0                   835d71bb40a29       kube-controller-manager-pause-708160   kube-system
	c85dfce27860a       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     About a minute ago   Exited              etcd                      0                   d9c35b9f91044       etcd-pause-708160                      kube-system
	
	
	==> coredns [3079e2ab8d34d01f38d7ae4115c0bb716f4d774566cb6851ad4a865b5d8c3196] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60027 - 3172 "HINFO IN 6383244010125079538.4169238436832107798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019344903s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8accf0af299bbeb42eaba7c41dfc1e952332c74448d6dab63424c3da6b9345c1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43326 - 44979 "HINFO IN 589235669207316467.2357734862419678262. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02472733s
	
	
	==> describe nodes <==
	Name:               pause-708160
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-708160
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=pause-708160
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_06_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:06:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-708160
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:07:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:07:20 +0000   Sat, 27 Dec 2025 10:06:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:07:20 +0000   Sat, 27 Dec 2025 10:06:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:07:20 +0000   Sat, 27 Dec 2025 10:06:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:07:20 +0000   Sat, 27 Dec 2025 10:06:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-708160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                0aed8aa6-6d51-4c4b-af7d-5533ba1bacb6
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-4m4gm                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     50s
	  kube-system                 etcd-pause-708160                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         57s
	  kube-system                 kindnet-h9hk6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      50s
	  kube-system                 kube-apiserver-pause-708160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-pause-708160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-proxy-2mnpk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-pause-708160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  51s   node-controller  Node pause-708160 event: Registered Node pause-708160 in Controller
	  Normal  RegisteredNode  16s   node-controller  Node pause-708160 event: Registered Node pause-708160 in Controller
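Editor's note: the Capacity, Allocatable, and Conditions fields in the describe output above are the same node status fields the trace reads when it logs "node storage ephemeral capacity is 203034800Ki" and "node cpu capacity is 2". As an illustration only (this is not the test harness's code), a small client-go sketch that reads those fields for the pause-708160 node could look like this; the kubeconfig path is a placeholder.

```go
// Illustrative sketch: read node capacity and conditions with client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; in the report the profile's client.crt/key
	// are wired up by minikube itself.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-708160", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
}
```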
	
	
	==> dmesg <==
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:42] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[  +3.379616] overlayfs: idmapped layers are currently not supported
	[ +26.881821] overlayfs: idmapped layers are currently not supported
	[Dec27 09:44] overlayfs: idmapped layers are currently not supported
	[Dec27 09:45] overlayfs: idmapped layers are currently not supported
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	[Dec27 09:53] overlayfs: idmapped layers are currently not supported
	[Dec27 09:57] overlayfs: idmapped layers are currently not supported
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9] <==
	{"level":"info","ts":"2025-12-27T10:07:12.103659Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:12.103679Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:12.105622Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T10:07:12.105735Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:07:12.105809Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:07:12.106660Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:07:12.140040Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:07:12.553826Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:12.553931Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:12.553998Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:12.554039Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:12.554088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:12.559373Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:12.559503Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:12.559548Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:12.559584Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:12.563321Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-708160 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:07:12.563499Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:12.563644Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:12.564540Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:12.588278Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:07:12.588357Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:07:12.596888Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:07:12.624426Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:12.625290Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [c85dfce27860abf2d02145e43c6d41b03c70e5c2b6e5bb8cf32868ab5d5a377f] <==
	{"level":"info","ts":"2025-12-27T10:06:32.561590Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T10:06:32.563616Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:06:32.563674Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:06:32.575461Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:06:32.591385Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:06:32.596288Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:06:32.600211Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:07:03.739603Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T10:07:03.739656Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-708160","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-27T10:07:03.739753Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T10:07:04.043645Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T10:07:04.043804Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T10:07:04.043885Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-27T10:07:04.044061Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-27T10:07:04.044116Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-27T10:07:04.044458Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-27T10:07:04.044530Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T10:07:04.044568Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-27T10:07:04.044405Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-27T10:07:04.044676Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T10:07:04.044718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T10:07:04.047354Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-27T10:07:04.047493Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T10:07:04.047557Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:04.047587Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-708160","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 10:07:34 up  1:50,  0 user,  load average: 2.87, 2.32, 2.11
	Linux pause-708160 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c1a19553104260faaaa5aa331a7ff93ae7f15092486abe8d8ca3f4b56ad77590] <==
	I1227 10:07:11.832490       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:07:11.836178       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:07:11.836347       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:07:11.836362       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:07:11.836378       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:07:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:07:12.107229       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:07:12.107317       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:07:12.107361       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1227 10:07:12.116149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:07:12.116704       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1227 10:07:12.117021       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:07:12.123497       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:07:12.140318       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:07:15.508649       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:07:15.508759       1 metrics.go:72] Registering metrics
	I1227 10:07:15.508838       1 controller.go:711] "Syncing nftables rules"
	I1227 10:07:22.091225       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:07:22.091403       1 main.go:301] handling current node
	I1227 10:07:32.092706       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:07:32.092791       1 main.go:301] handling current node
	
	
	==> kindnet [c7965a3e48bb8bf18d75a8539e56bfc922406ccd352f4b28d84d0a546f4e6c36] <==
	I1227 10:06:48.018797       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:06:48.019846       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:06:48.020084       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:06:48.020134       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:06:48.020178       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:06:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:06:48.223866       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:06:48.223991       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:06:48.224035       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:06:48.225902       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:06:48.525178       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:06:48.525303       1 metrics.go:72] Registering metrics
	I1227 10:06:48.525405       1 controller.go:711] "Syncing nftables rules"
	I1227 10:06:58.223415       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:06:58.223468       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0b0eb38c19eab92cacf23e4694451181e8c28243a242a10d326b5b858be4470] <==
	W1227 10:07:03.810734       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810763       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810808       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810837       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810863       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810889       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810919       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810948       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810974       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811002       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811031       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811057       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811103       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811134       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811162       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811189       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811220       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811248       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811276       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811304       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817369       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817471       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817560       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817660       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817756       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc] <==
	I1227 10:07:15.437651       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:07:15.450098       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 10:07:15.450136       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 10:07:15.450144       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:07:15.450234       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:07:15.450325       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:15.450345       1 policy_source.go:248] refreshing policies
	I1227 10:07:15.450489       1 aggregator.go:187] initial CRD sync complete...
	I1227 10:07:15.450505       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 10:07:15.450510       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:07:15.450515       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:07:15.450554       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:07:15.451130       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:15.451337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:07:15.458912       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:07:15.469117       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:07:15.524262       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:07:15.530270       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1227 10:07:15.537288       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:07:16.117483       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:07:17.298114       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:07:18.637769       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:07:18.840428       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:07:18.887399       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:07:18.989937       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [4ce51511884834f1f4f55745c7878be41ec49c18945051c3c04b7b118b00b869] <==
	I1227 10:06:43.137998       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138005       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138012       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138018       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138026       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138040       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138124       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138248       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138700       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138776       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.131615       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.158528       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:06:43.158613       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-708160"
	I1227 10:06:43.158677       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 10:06:43.154597       1 range_allocator.go:433] "Set node PodCIDR" node="pause-708160" podCIDRs=["10.244.0.0/24"]
	I1227 10:06:43.131621       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.131627       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.131633       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.172633       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.174761       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:43.329650       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.338788       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.338821       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:06:43.338827       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:07:03.160993       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c] <==
	I1227 10:07:18.526216       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.540391       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.540390       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.540412       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.540423       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.542689       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.543878       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.543912       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:07:18.543931       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:07:18.543935       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:18.543939       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545179       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545741       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545803       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545737       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545766       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545783       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545789       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545775       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.548859       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.551038       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.551075       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:07:18.551082       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:07:18.599258       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.644940       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [28bffddc84703204ed115c0dffebcc0bca180c4c588838da7fc20688bc1238ff] <==
	I1227 10:06:44.746649       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:06:44.897402       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:44.998280       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:44.998338       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:06:44.998415       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:06:45.211406       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:06:45.211472       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:06:45.220571       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:06:45.224481       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:06:45.224613       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:06:45.230037       1 config.go:200] "Starting service config controller"
	I1227 10:06:45.230134       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:06:45.230185       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:06:45.230216       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:06:45.230266       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:06:45.230299       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:06:45.240990       1 config.go:309] "Starting node config controller"
	I1227 10:06:45.241105       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:06:45.241144       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:06:45.331287       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:06:45.331304       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:06:45.331322       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [83b21918cd3f4a0784408c901996870c341dd01f50c25e2bfd73792516ccd48b] <==
	I1227 10:07:12.648220       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:07:13.736646       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:15.541291       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:15.541432       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:07:15.551828       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:07:15.664273       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:07:15.664408       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:07:15.673075       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:07:15.673495       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:07:15.673508       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:07:15.679498       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:07:15.679525       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:07:15.679819       1 config.go:200] "Starting service config controller"
	I1227 10:07:15.679839       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:07:15.680202       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:07:15.680219       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:07:15.680609       1 config.go:309] "Starting node config controller"
	I1227 10:07:15.680629       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:07:15.680636       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:07:15.780019       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:07:15.780155       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 10:07:15.783670       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f] <==
	I1227 10:07:15.365122       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:07:15.365157       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:07:15.367285       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:07:15.367409       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:07:15.367428       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:15.367443       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 10:07:15.433152       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:07:15.433289       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:07:15.433359       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:07:15.433428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:07:15.433492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:07:15.433547       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:07:15.433616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:07:15.433688       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:07:15.433757       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:07:15.433823       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:07:15.433883       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:07:15.433934       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:07:15.434076       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:07:15.434175       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:07:15.434278       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:07:15.434506       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:07:15.434579       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:07:15.437218       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1227 10:07:15.469451       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [4a50eddc661a4f82f4965c2ad250ef56be38dd8cb5ec0a70c61f8a632169fcb8] <==
	E1227 10:06:35.554191       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:06:36.379701       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:06:36.501521       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:06:36.501683       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:06:36.505311       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:06:36.583299       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:06:36.623597       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:06:36.657157       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:06:36.670550       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:06:36.692075       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:06:36.782285       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:06:36.784986       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:06:36.852471       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:06:36.907621       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:06:36.920927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:06:36.977300       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:06:36.988672       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:06:37.008068       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	I1227 10:06:39.003457       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:03.742895       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1227 10:07:03.753182       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1227 10:07:03.753223       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1227 10:07:03.753253       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:07:03.753466       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1227 10:07:03.753493       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.176518    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-4m4gm\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="70dcfb3e-0b6d-48dd-a817-849c6ffbda06" pod="kube-system/coredns-7d764666f9-4m4gm"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.259960    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-708160\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="8437bdea109a5d63c3096dca1ea29eca" pod="kube-system/kube-scheduler-pause-708160"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.342586    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-708160\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="3978c97bc810411d8a12a7fb5530b6a6" pod="kube-system/kube-controller-manager-pause-708160"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.412080    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-708160\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="93e8f5bd9542f5694bdd1ee4733121e1" pod="kube-system/kube-apiserver-pause-708160"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.432520    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-708160\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="35a0d8cfc40a7b855a6e10d8690f470a" pod="kube-system/etcd-pause-708160"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.436725    1300 status_manager.go:1045] "Failed to get status for pod" err=<
	Dec 27 10:07:15 pause-708160 kubelet[1300]:         pods "kindnet-h9hk6" is forbidden: User "system:node:pause-708160" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-708160' and this object
	Dec 27 10:07:15 pause-708160 kubelet[1300]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	Dec 27 10:07:15 pause-708160 kubelet[1300]:  > podUID="21ce871d-d4c7-4ac3-8459-4154f198693b" pod="kube-system/kindnet-h9hk6"
	Dec 27 10:07:16 pause-708160 kubelet[1300]: E1227 10:07:16.421058    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-708160" containerName="kube-apiserver"
	Dec 27 10:07:17 pause-708160 kubelet[1300]: E1227 10:07:17.777661    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-708160" containerName="kube-scheduler"
	Dec 27 10:07:18 pause-708160 kubelet[1300]: E1227 10:07:18.677797    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-708160" containerName="kube-controller-manager"
	Dec 27 10:07:19 pause-708160 kubelet[1300]: W1227 10:07:19.655056    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 27 10:07:22 pause-708160 kubelet[1300]: E1227 10:07:22.764957    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-708160" containerName="etcd"
	Dec 27 10:07:22 pause-708160 kubelet[1300]: E1227 10:07:22.870565    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-708160" containerName="etcd"
	Dec 27 10:07:23 pause-708160 kubelet[1300]: E1227 10:07:23.851608    1300 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-4m4gm" containerName="coredns"
	Dec 27 10:07:26 pause-708160 kubelet[1300]: E1227 10:07:26.430294    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-708160" containerName="kube-apiserver"
	Dec 27 10:07:26 pause-708160 kubelet[1300]: E1227 10:07:26.886822    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-708160" containerName="kube-apiserver"
	Dec 27 10:07:27 pause-708160 kubelet[1300]: E1227 10:07:27.787054    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-708160" containerName="kube-scheduler"
	Dec 27 10:07:27 pause-708160 kubelet[1300]: E1227 10:07:27.889353    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-708160" containerName="kube-scheduler"
	Dec 27 10:07:28 pause-708160 kubelet[1300]: E1227 10:07:28.686893    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-708160" containerName="kube-controller-manager"
	Dec 27 10:07:29 pause-708160 kubelet[1300]: W1227 10:07:29.662542    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 27 10:07:30 pause-708160 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:07:30 pause-708160 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:07:30 pause-708160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-708160 -n pause-708160
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-708160 -n pause-708160: exit status 2 (455.802943ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-708160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-708160
helpers_test.go:244: (dbg) docker inspect pause-708160:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2",
	        "Created": "2025-12-27T10:06:16.721269401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 420903,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:06:17.670118228Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2/hostname",
	        "HostsPath": "/var/lib/docker/containers/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2/hosts",
	        "LogPath": "/var/lib/docker/containers/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2/c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2-json.log",
	        "Name": "/pause-708160",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-708160:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-708160",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c91bd29bc7bce4c61e7dc72adfd3d775cd600aacf7066ed247a6dec56baa1fe2",
	                "LowerDir": "/var/lib/docker/overlay2/741c65814d27cdf582109317b5f4f9c1f0b778d90e7c8b86054ff6731dbfcbb8-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/741c65814d27cdf582109317b5f4f9c1f0b778d90e7c8b86054ff6731dbfcbb8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/741c65814d27cdf582109317b5f4f9c1f0b778d90e7c8b86054ff6731dbfcbb8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/741c65814d27cdf582109317b5f4f9c1f0b778d90e7c8b86054ff6731dbfcbb8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-708160",
	                "Source": "/var/lib/docker/volumes/pause-708160/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-708160",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-708160",
	                "name.minikube.sigs.k8s.io": "pause-708160",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "41a346eaa039a76c3460efd0f80e0c2211c2f3dcb05fe48ee973c47ac3eabc2a",
	            "SandboxKey": "/var/run/docker/netns/41a346eaa039",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33323"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33324"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33327"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33325"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33326"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-708160": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:ee:40:a2:ce:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84318959cf08365bced04e97fd9728b32623d53629375dc1d00f31bb0f1a3997",
	                    "EndpointID": "1dc3f52539b0f6b570a6db6d31625d62d8c78bd54ce65d4fd34efaa2a78f25f6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-708160",
	                        "c91bd29bc7bc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-708160 -n pause-708160
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-708160 -n pause-708160: exit status 2 (436.12599ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-708160 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-708160 logs -n 25: (1.658495664s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-823603                                                                                         │ multinode-823603            │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ start   │ -p multinode-823603-m02 --driver=docker  --container-runtime=crio                                                │ multinode-823603-m02        │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │                     │
	│ start   │ -p multinode-823603-m03 --driver=docker  --container-runtime=crio                                                │ multinode-823603-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 10:03 UTC │ 27 Dec 25 10:04 UTC │
	│ node    │ add -p multinode-823603                                                                                          │ multinode-823603            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ delete  │ -p multinode-823603-m03                                                                                          │ multinode-823603-m03        │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ delete  │ -p multinode-823603                                                                                              │ multinode-823603            │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ start   │ -p scheduled-stop-425603 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ stop    │ -p scheduled-stop-425603 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --cancel-scheduled                                                                      │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:04 UTC │ 27 Dec 25 10:04 UTC │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ stop    │ -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ delete  │ -p scheduled-stop-425603                                                                                         │ scheduled-stop-425603       │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ start   │ -p insufficient-storage-217644 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-217644 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ delete  │ -p insufficient-storage-217644                                                                                   │ insufficient-storage-217644 │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ start   │ -p pause-708160 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-708160                │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p missing-upgrade-651060 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-651060      │ jenkins │ v1.35.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p pause-708160 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-708160                │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p missing-upgrade-651060 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-651060      │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	│ pause   │ -p pause-708160 --alsologtostderr -v=5                                                                           │ pause-708160                │ jenkins │ v1.37.0 │ 27 Dec 25 10:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:07:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:07:10.498145  425825 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:07:10.498368  425825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:10.498400  425825 out.go:374] Setting ErrFile to fd 2...
	I1227 10:07:10.498426  425825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:07:10.498783  425825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:07:10.499251  425825 out.go:368] Setting JSON to false
	I1227 10:07:10.500301  425825 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6584,"bootTime":1766823447,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:07:10.500410  425825 start.go:143] virtualization:  
	I1227 10:07:10.512208  425825 out.go:179] * [missing-upgrade-651060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:07:10.515342  425825 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:07:10.515425  425825 notify.go:221] Checking for updates...
	I1227 10:07:10.519059  425825 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:07:10.522424  425825 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:07:10.525352  425825 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:07:10.528338  425825 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:07:10.531377  425825 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:07:10.535091  425825 config.go:182] Loaded profile config "missing-upgrade-651060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 10:07:10.538718  425825 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 10:07:10.541771  425825 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:07:10.636155  425825 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:07:10.636279  425825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:10.750680  425825 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:07:10.736903935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:10.750806  425825 docker.go:319] overlay module found
	I1227 10:07:10.753970  425825 out.go:179] * Using the docker driver based on existing profile
	I1227 10:07:10.756724  425825 start.go:309] selected driver: docker
	I1227 10:07:10.756742  425825 start.go:928] validating driver "docker" against &{Name:missing-upgrade-651060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-651060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:10.756831  425825 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:07:10.757518  425825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:07:10.843166  425825 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:07:10.833310021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:07:10.843483  425825 cni.go:84] Creating CNI manager for ""
	I1227 10:07:10.843551  425825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:10.843598  425825 start.go:353] cluster config:
	{Name:missing-upgrade-651060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-651060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:10.846762  425825 out.go:179] * Starting "missing-upgrade-651060" primary control-plane node in "missing-upgrade-651060" cluster
	I1227 10:07:10.849660  425825 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:07:10.852625  425825 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:07:10.855481  425825 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 10:07:10.855522  425825 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:07:10.855533  425825 cache.go:65] Caching tarball of preloaded images
	I1227 10:07:10.855617  425825 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:07:10.855626  425825 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1227 10:07:10.855742  425825 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/missing-upgrade-651060/config.json ...
	I1227 10:07:10.855946  425825 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1227 10:07:10.883482  425825 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1227 10:07:10.883501  425825 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1227 10:07:10.883516  425825 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:07:10.883545  425825 start.go:360] acquireMachinesLock for missing-upgrade-651060: {Name:mkf297c1d32a94879e675043ae17ea8cf87ee97f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:07:10.883601  425825 start.go:364] duration metric: took 35.257µs to acquireMachinesLock for "missing-upgrade-651060"
	I1227 10:07:10.883620  425825 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:07:10.883626  425825 fix.go:54] fixHost starting: 
	I1227 10:07:10.883895  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:10.907371  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:10.907436  425825 fix.go:112] recreateIfNeeded on missing-upgrade-651060: state= err=unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.907457  425825 fix.go:117] machineExists: false. err=machine does not exist
	I1227 10:07:10.910797  425825 out.go:179] * docker "missing-upgrade-651060" container is missing, will recreate.
	I1227 10:07:09.095579  425058 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:07:09.095603  425058 machine.go:97] duration metric: took 6.789244034s to provisionDockerMachine
	I1227 10:07:09.095616  425058 start.go:293] postStartSetup for "pause-708160" (driver="docker")
	I1227 10:07:09.095628  425058 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:07:09.095689  425058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:07:09.095740  425058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-708160
	I1227 10:07:09.130342  425058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa Username:docker}
	I1227 10:07:09.242411  425058 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:07:09.247366  425058 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:07:09.247395  425058 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:07:09.247406  425058 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:07:09.247461  425058 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:07:09.247538  425058 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:07:09.247644  425058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:07:09.257235  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:07:09.282521  425058 start.go:296] duration metric: took 186.889557ms for postStartSetup
	I1227 10:07:09.282695  425058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:07:09.282848  425058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-708160
	I1227 10:07:09.308981  425058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa Username:docker}
	I1227 10:07:09.411593  425058 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:07:09.418976  425058 fix.go:56] duration metric: took 7.15992937s for fixHost
	I1227 10:07:09.419007  425058 start.go:83] releasing machines lock for "pause-708160", held for 7.159982729s
	I1227 10:07:09.419093  425058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-708160
	I1227 10:07:09.461090  425058 ssh_runner.go:195] Run: cat /version.json
	I1227 10:07:09.461162  425058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-708160
	I1227 10:07:09.461507  425058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:07:09.461568  425058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-708160
	I1227 10:07:09.514376  425058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa Username:docker}
	I1227 10:07:09.526247  425058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33323 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/pause-708160/id_rsa Username:docker}
	I1227 10:07:09.676021  425058 ssh_runner.go:195] Run: systemctl --version
	I1227 10:07:09.797688  425058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:07:09.891197  425058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:07:09.897735  425058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:07:09.897805  425058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:07:09.915544  425058 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:07:09.915565  425058 start.go:496] detecting cgroup driver to use...
	I1227 10:07:09.915605  425058 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:07:09.915653  425058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:07:09.941582  425058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:07:09.962306  425058 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:07:09.962388  425058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:07:09.983015  425058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:07:10.003443  425058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:07:10.181196  425058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:07:10.364987  425058 docker.go:234] disabling docker service ...
	I1227 10:07:10.365124  425058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:07:10.382786  425058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:07:10.397564  425058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:07:10.627640  425058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:07:10.829322  425058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:07:10.849134  425058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:07:10.865944  425058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:07:10.866076  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.875900  425058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:07:10.876099  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.889536  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.899091  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.908867  425058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:07:10.921037  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.932395  425058 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.942887  425058 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:07:10.954829  425058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:07:10.963446  425058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:07:10.972739  425058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:11.116744  425058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:07:11.344017  425058 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:07:11.344097  425058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:07:11.348466  425058 start.go:574] Will wait 60s for crictl version
	I1227 10:07:11.348531  425058 ssh_runner.go:195] Run: which crictl
	I1227 10:07:11.352495  425058 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:07:11.377896  425058 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:07:11.378001  425058 ssh_runner.go:195] Run: crio --version
	I1227 10:07:11.408770  425058 ssh_runner.go:195] Run: crio --version
	I1227 10:07:11.440997  425058 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:07:11.443893  425058 cli_runner.go:164] Run: docker network inspect pause-708160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:11.460461  425058 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:07:11.464759  425058 kubeadm.go:884] updating cluster {Name:pause-708160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-708160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:07:11.464910  425058 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:07:11.464970  425058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:11.502789  425058 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:11.502814  425058 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:07:11.502870  425058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:07:11.528757  425058 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:07:11.528781  425058 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:07:11.528789  425058 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:07:11.528896  425058 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-708160 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-708160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:07:11.528981  425058 ssh_runner.go:195] Run: crio config
	I1227 10:07:11.600646  425058 cni.go:84] Creating CNI manager for ""
	I1227 10:07:11.600676  425058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:07:11.600718  425058 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:07:11.600758  425058 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-708160 NodeName:pause-708160 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:07:11.600896  425058 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-708160"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:07:11.600972  425058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:07:11.610116  425058 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:07:11.610183  425058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:07:11.617747  425058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1227 10:07:11.630709  425058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:07:11.646516  425058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1227 10:07:11.683670  425058 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:07:11.690542  425058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:11.936675  425058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:11.966074  425058 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160 for IP: 192.168.76.2
	I1227 10:07:11.966101  425058 certs.go:195] generating shared ca certs ...
	I1227 10:07:11.966118  425058 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:11.966338  425058 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:07:11.966408  425058 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:07:11.966423  425058 certs.go:257] generating profile certs ...
	I1227 10:07:11.966571  425058 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.key
	I1227 10:07:11.966695  425058 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/apiserver.key.01d2a6ce
	I1227 10:07:11.966781  425058 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/proxy-client.key
	I1227 10:07:11.966971  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:07:11.967028  425058 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:07:11.967043  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:07:11.967104  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:07:11.967157  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:07:11.967217  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:07:11.967291  425058 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:07:11.968091  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:07:12.007082  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:07:12.057326  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:07:12.084394  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:07:12.116508  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1227 10:07:12.159395  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:07:12.195479  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:07:12.226967  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:07:12.269075  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:07:12.301333  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:07:12.333834  425058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:07:12.362748  425058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:07:12.386500  425058 ssh_runner.go:195] Run: openssl version
	I1227 10:07:12.395687  425058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:07:12.404952  425058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:07:12.417773  425058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:07:12.422596  425058 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:07:12.422708  425058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:07:12.470003  425058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:07:12.481108  425058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:07:12.489442  425058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:07:12.499155  425058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:07:12.503482  425058 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:07:12.503602  425058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:07:12.547868  425058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:07:12.557909  425058 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:12.566021  425058 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:07:12.574628  425058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:12.580929  425058 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:12.581044  425058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:07:12.628172  425058 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:07:12.636569  425058 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:07:12.642122  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:07:12.688342  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:07:12.737683  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:07:12.815459  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:07:12.890892  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:07:12.948859  425058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 10:07:13.005100  425058 kubeadm.go:401] StartCluster: {Name:pause-708160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-708160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:07:13.005240  425058 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:07:13.005313  425058 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:07:13.049717  425058 cri.go:96] found id: "d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc"
	I1227 10:07:13.049737  425058 cri.go:96] found id: "10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9"
	I1227 10:07:13.049741  425058 cri.go:96] found id: "a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c"
	I1227 10:07:13.049745  425058 cri.go:96] found id: "2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f"
	I1227 10:07:13.049748  425058 cri.go:96] found id: "8accf0af299bbeb42eaba7c41dfc1e952332c74448d6dab63424c3da6b9345c1"
	I1227 10:07:13.049752  425058 cri.go:96] found id: "83b21918cd3f4a0784408c901996870c341dd01f50c25e2bfd73792516ccd48b"
	I1227 10:07:13.049755  425058 cri.go:96] found id: "c1a19553104260faaaa5aa331a7ff93ae7f15092486abe8d8ca3f4b56ad77590"
	I1227 10:07:13.049758  425058 cri.go:96] found id: "3079e2ab8d34d01f38d7ae4115c0bb716f4d774566cb6851ad4a865b5d8c3196"
	I1227 10:07:13.049761  425058 cri.go:96] found id: "c7965a3e48bb8bf18d75a8539e56bfc922406ccd352f4b28d84d0a546f4e6c36"
	I1227 10:07:13.049769  425058 cri.go:96] found id: "28bffddc84703204ed115c0dffebcc0bca180c4c588838da7fc20688bc1238ff"
	I1227 10:07:13.049772  425058 cri.go:96] found id: "4a50eddc661a4f82f4965c2ad250ef56be38dd8cb5ec0a70c61f8a632169fcb8"
	I1227 10:07:13.049775  425058 cri.go:96] found id: "d0b0eb38c19eab92cacf23e4694451181e8c28243a242a10d326b5b858be4470"
	I1227 10:07:13.049788  425058 cri.go:96] found id: "4ce51511884834f1f4f55745c7878be41ec49c18945051c3c04b7b118b00b869"
	I1227 10:07:13.049791  425058 cri.go:96] found id: "c85dfce27860abf2d02145e43c6d41b03c70e5c2b6e5bb8cf32868ab5d5a377f"
	I1227 10:07:13.049794  425058 cri.go:96] found id: ""
	I1227 10:07:13.049852  425058 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:07:13.074393  425058 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:07:13Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:07:13.074478  425058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:07:13.087565  425058 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:07:13.087582  425058 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:07:13.087636  425058 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:07:13.100369  425058 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:07:13.101185  425058 kubeconfig.go:125] found "pause-708160" server: "https://192.168.76.2:8443"
	I1227 10:07:13.102164  425058 kapi.go:59] client config for pause-708160: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 10:07:13.102873  425058 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1227 10:07:13.103004  425058 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1227 10:07:13.103034  425058 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1227 10:07:13.103055  425058 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1227 10:07:13.103086  425058 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1227 10:07:13.103115  425058 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1227 10:07:13.103474  425058 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:07:13.117269  425058 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 10:07:13.117301  425058 kubeadm.go:602] duration metric: took 29.712459ms to restartPrimaryControlPlane
	I1227 10:07:13.117310  425058 kubeadm.go:403] duration metric: took 112.223523ms to StartCluster
	I1227 10:07:13.117326  425058 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:13.117395  425058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:07:13.118294  425058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:07:13.118516  425058 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:07:13.118985  425058 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:07:13.119203  425058 config.go:182] Loaded profile config "pause-708160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:07:13.122286  425058 out.go:179] * Enabled addons: 
	I1227 10:07:13.122406  425058 out.go:179] * Verifying Kubernetes components...
	I1227 10:07:10.913675  425825 delete.go:124] DEMOLISHING missing-upgrade-651060 ...
	I1227 10:07:10.913779  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:10.929561  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	W1227 10:07:10.929620  425825 stop.go:83] unable to get state: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.929638  425825 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.930088  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:10.949733  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:10.949808  425825 delete.go:82] Unable to get host status for missing-upgrade-651060, assuming it has already been deleted: state: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.949876  425825 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-651060
	W1227 10:07:10.974774  425825 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-651060 returned with exit code 1
	I1227 10:07:10.974805  425825 kic.go:371] could not find the container missing-upgrade-651060 to remove it. will try anyways
	I1227 10:07:10.974856  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:10.993735  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	W1227 10:07:10.993807  425825 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:10.993871  425825 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-651060 /bin/bash -c "sudo init 0"
	W1227 10:07:11.037628  425825 cli_runner.go:211] docker exec --privileged -t missing-upgrade-651060 /bin/bash -c "sudo init 0" returned with exit code 1
	I1227 10:07:11.037677  425825 oci.go:659] error shutdown missing-upgrade-651060: docker exec --privileged -t missing-upgrade-651060 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:12.037831  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:12.059392  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:12.059450  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:12.059468  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:12.059515  425825 retry.go:84] will retry after 700ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:12.759410  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:12.790089  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:12.790146  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:12.790156  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:13.555154  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:13.591247  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:13.591352  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:13.591372  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:15.019623  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:15.051920  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:15.052006  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:15.052024  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:13.125252  425058 addons.go:530] duration metric: took 6.26006ms for enable addons: enabled=[]
	I1227 10:07:13.125377  425058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:07:13.354013  425058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:07:13.371680  425058 node_ready.go:35] waiting up to 6m0s for node "pause-708160" to be "Ready" ...
	I1227 10:07:15.369575  425058 node_ready.go:49] node "pause-708160" is "Ready"
	I1227 10:07:15.369602  425058 node_ready.go:38] duration metric: took 1.997845068s for node "pause-708160" to be "Ready" ...
	I1227 10:07:15.369615  425058 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:07:15.369673  425058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:07:15.386014  425058 api_server.go:72] duration metric: took 2.267467461s to wait for apiserver process to appear ...
	I1227 10:07:15.386037  425058 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:07:15.386056  425058 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:07:15.426393  425058 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1227 10:07:15.426467  425058 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1227 10:07:15.887117  425058 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:07:15.896358  425058 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 10:07:15.896434  425058 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 10:07:16.386165  425058 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:07:16.395160  425058 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:07:16.396442  425058 api_server.go:141] control plane version: v1.35.0
	I1227 10:07:16.396477  425058 api_server.go:131] duration metric: took 1.010433256s to wait for apiserver health ...
	I1227 10:07:16.396487  425058 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:07:16.400255  425058 system_pods.go:59] 7 kube-system pods found
	I1227 10:07:16.400310  425058 system_pods.go:61] "coredns-7d764666f9-4m4gm" [70dcfb3e-0b6d-48dd-a817-849c6ffbda06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:07:16.400325  425058 system_pods.go:61] "etcd-pause-708160" [c207facb-d7d8-44f5-9551-b50f1312d45f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:07:16.400338  425058 system_pods.go:61] "kindnet-h9hk6" [21ce871d-d4c7-4ac3-8459-4154f198693b] Running
	I1227 10:07:16.400345  425058 system_pods.go:61] "kube-apiserver-pause-708160" [057197e0-abe1-41b2-a36d-1a48cb3c7f82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:07:16.400366  425058 system_pods.go:61] "kube-controller-manager-pause-708160" [b62c56f5-dba8-4f4a-a42a-c4a4e24b0683] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:07:16.400372  425058 system_pods.go:61] "kube-proxy-2mnpk" [9865d55d-22e8-4301-9b7a-497ee437a59a] Running
	I1227 10:07:16.400380  425058 system_pods.go:61] "kube-scheduler-pause-708160" [6d4826fa-7096-43ea-907d-57c97b93d482] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:07:16.400396  425058 system_pods.go:74] duration metric: took 3.899591ms to wait for pod list to return data ...
	I1227 10:07:16.400411  425058 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:07:16.403223  425058 default_sa.go:45] found service account: "default"
	I1227 10:07:16.403252  425058 default_sa.go:55] duration metric: took 2.834237ms for default service account to be created ...
	I1227 10:07:16.403264  425058 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:07:16.406524  425058 system_pods.go:86] 7 kube-system pods found
	I1227 10:07:16.406562  425058 system_pods.go:89] "coredns-7d764666f9-4m4gm" [70dcfb3e-0b6d-48dd-a817-849c6ffbda06] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:07:16.406584  425058 system_pods.go:89] "etcd-pause-708160" [c207facb-d7d8-44f5-9551-b50f1312d45f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:07:16.406591  425058 system_pods.go:89] "kindnet-h9hk6" [21ce871d-d4c7-4ac3-8459-4154f198693b] Running
	I1227 10:07:16.406602  425058 system_pods.go:89] "kube-apiserver-pause-708160" [057197e0-abe1-41b2-a36d-1a48cb3c7f82] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:07:16.406613  425058 system_pods.go:89] "kube-controller-manager-pause-708160" [b62c56f5-dba8-4f4a-a42a-c4a4e24b0683] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:07:16.406618  425058 system_pods.go:89] "kube-proxy-2mnpk" [9865d55d-22e8-4301-9b7a-497ee437a59a] Running
	I1227 10:07:16.406625  425058 system_pods.go:89] "kube-scheduler-pause-708160" [6d4826fa-7096-43ea-907d-57c97b93d482] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:07:16.406639  425058 system_pods.go:126] duration metric: took 3.36943ms to wait for k8s-apps to be running ...
	I1227 10:07:16.406647  425058 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:07:16.406707  425058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:07:16.420435  425058 system_svc.go:56] duration metric: took 13.77742ms WaitForService to wait for kubelet
	I1227 10:07:16.420469  425058 kubeadm.go:587] duration metric: took 3.301926955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:07:16.420488  425058 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:07:16.424468  425058 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:07:16.424502  425058 node_conditions.go:123] node cpu capacity is 2
	I1227 10:07:16.424516  425058 node_conditions.go:105] duration metric: took 4.022497ms to run NodePressure ...
	I1227 10:07:16.424530  425058 start.go:242] waiting for startup goroutines ...
	I1227 10:07:16.424537  425058 start.go:247] waiting for cluster config update ...
	I1227 10:07:16.424546  425058 start.go:256] writing updated cluster config ...
	I1227 10:07:16.424857  425058 ssh_runner.go:195] Run: rm -f paused
	I1227 10:07:16.428622  425058 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:07:16.429289  425058 kapi.go:59] client config for pause-708160: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.crt", KeyFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/profiles/pause-708160/client.key", CAFile:"/home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f30470), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1227 10:07:16.432650  425058 pod_ready.go:83] waiting for pod "coredns-7d764666f9-4m4gm" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:16.106830  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:16.128863  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:16.130432  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:16.130459  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:19.376122  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:19.391330  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:19.391403  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:19.391418  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:19.391452  425825 retry.go:84] will retry after 2.3s: couldn't verify container is exited. %v: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	W1227 10:07:18.441395  425058 pod_ready.go:104] pod "coredns-7d764666f9-4m4gm" is not "Ready", error: <nil>
	W1227 10:07:20.939578  425058 pod_ready.go:104] pod "coredns-7d764666f9-4m4gm" is not "Ready", error: <nil>
	I1227 10:07:21.693534  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:21.709646  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:21.709718  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:21.709731  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:25.390461  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:25.408840  425825 cli_runner.go:211] docker container inspect missing-upgrade-651060 --format={{.State.Status}} returned with exit code 1
	I1227 10:07:25.408938  425825 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	I1227 10:07:25.408962  425825 oci.go:673] temporary error: container missing-upgrade-651060 status is  but expect it to be exited
	I1227 10:07:25.409019  425825 oci.go:88] couldn't shut down missing-upgrade-651060 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-651060": docker container inspect missing-upgrade-651060 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-651060
	 
	I1227 10:07:25.409113  425825 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-651060
	I1227 10:07:25.425026  425825 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-651060
	W1227 10:07:25.448108  425825 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-651060 returned with exit code 1
	I1227 10:07:25.448208  425825 cli_runner.go:164] Run: docker network inspect missing-upgrade-651060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:25.471378  425825 cli_runner.go:164] Run: docker network rm missing-upgrade-651060
	I1227 10:07:25.573252  425825 fix.go:124] Sleeping 1 second for extra luck!
	I1227 10:07:26.573413  425825 start.go:125] createHost starting for "" (driver="docker")
	W1227 10:07:22.939625  425058 pod_ready.go:104] pod "coredns-7d764666f9-4m4gm" is not "Ready", error: <nil>
	I1227 10:07:23.938048  425058 pod_ready.go:94] pod "coredns-7d764666f9-4m4gm" is "Ready"
	I1227 10:07:23.938081  425058 pod_ready.go:86] duration metric: took 7.50540505s for pod "coredns-7d764666f9-4m4gm" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:23.941257  425058 pod_ready.go:83] waiting for pod "etcd-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:23.946029  425058 pod_ready.go:94] pod "etcd-pause-708160" is "Ready"
	I1227 10:07:23.946055  425058 pod_ready.go:86] duration metric: took 4.770088ms for pod "etcd-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:23.948314  425058 pod_ready.go:83] waiting for pod "kube-apiserver-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:07:25.953647  425058 pod_ready.go:104] pod "kube-apiserver-pause-708160" is not "Ready", error: <nil>
	I1227 10:07:26.955161  425058 pod_ready.go:94] pod "kube-apiserver-pause-708160" is "Ready"
	I1227 10:07:26.955186  425058 pod_ready.go:86] duration metric: took 3.006847816s for pod "kube-apiserver-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:26.958456  425058 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:28.963685  425058 pod_ready.go:94] pod "kube-controller-manager-pause-708160" is "Ready"
	I1227 10:07:28.963712  425058 pod_ready.go:86] duration metric: took 2.005232944s for pod "kube-controller-manager-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:28.965931  425058 pod_ready.go:83] waiting for pod "kube-proxy-2mnpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:28.970080  425058 pod_ready.go:94] pod "kube-proxy-2mnpk" is "Ready"
	I1227 10:07:28.970109  425058 pod_ready.go:86] duration metric: took 4.147421ms for pod "kube-proxy-2mnpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:28.972244  425058 pod_ready.go:83] waiting for pod "kube-scheduler-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:29.337939  425058 pod_ready.go:94] pod "kube-scheduler-pause-708160" is "Ready"
	I1227 10:07:29.337963  425058 pod_ready.go:86] duration metric: took 365.691465ms for pod "kube-scheduler-pause-708160" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:07:29.337976  425058 pod_ready.go:40] duration metric: took 12.909319139s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:07:29.418526  425058 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:07:29.434430  425058 out.go:203] 
	W1227 10:07:29.444778  425058 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:07:29.453083  425058 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:07:29.461333  425058 out.go:179] * Done! kubectl is now configured to use "pause-708160" cluster and "default" namespace by default
	I1227 10:07:26.576510  425825 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:07:26.576636  425825 start.go:159] libmachine.API.Create for "missing-upgrade-651060" (driver="docker")
	I1227 10:07:26.576676  425825 client.go:173] LocalClient.Create starting
	I1227 10:07:26.576750  425825 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:07:26.576798  425825 main.go:144] libmachine: Decoding PEM data...
	I1227 10:07:26.576819  425825 main.go:144] libmachine: Parsing certificate...
	I1227 10:07:26.576875  425825 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:07:26.576898  425825 main.go:144] libmachine: Decoding PEM data...
	I1227 10:07:26.576915  425825 main.go:144] libmachine: Parsing certificate...
	I1227 10:07:26.577175  425825 cli_runner.go:164] Run: docker network inspect missing-upgrade-651060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:07:26.592785  425825 cli_runner.go:211] docker network inspect missing-upgrade-651060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:07:26.592878  425825 network_create.go:284] running [docker network inspect missing-upgrade-651060] to gather additional debugging logs...
	I1227 10:07:26.592901  425825 cli_runner.go:164] Run: docker network inspect missing-upgrade-651060
	W1227 10:07:26.608047  425825 cli_runner.go:211] docker network inspect missing-upgrade-651060 returned with exit code 1
	I1227 10:07:26.608079  425825 network_create.go:287] error running [docker network inspect missing-upgrade-651060]: docker network inspect missing-upgrade-651060: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-651060 not found
	I1227 10:07:26.608091  425825 network_create.go:289] output of [docker network inspect missing-upgrade-651060]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-651060 not found
	
	** /stderr **
	I1227 10:07:26.608210  425825 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:07:26.624025  425825 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:07:26.624513  425825 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:07:26.624819  425825 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:07:26.625211  425825 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-84318959cf08 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:f4:2d:46:56:6a} reservation:<nil>}
	I1227 10:07:26.625690  425825 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bfed90}
	I1227 10:07:26.625714  425825 network_create.go:124] attempt to create docker network missing-upgrade-651060 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:07:26.625777  425825 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-651060 missing-upgrade-651060
	I1227 10:07:26.691032  425825 network_create.go:108] docker network missing-upgrade-651060 192.168.85.0/24 created
	I1227 10:07:26.691079  425825 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-651060" container
	I1227 10:07:26.691193  425825 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:07:26.706861  425825 cli_runner.go:164] Run: docker volume create missing-upgrade-651060 --label name.minikube.sigs.k8s.io=missing-upgrade-651060 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:07:26.722348  425825 oci.go:103] Successfully created a docker volume missing-upgrade-651060
	I1227 10:07:26.722452  425825 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-651060-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-651060 --entrypoint /usr/bin/test -v missing-upgrade-651060:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1227 10:07:27.134452  425825 oci.go:107] Successfully prepared a docker volume missing-upgrade-651060
	I1227 10:07:27.134515  425825 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1227 10:07:27.134526  425825 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:07:27.134592  425825 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-651060:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:07:31.720054  425825 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-651060:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.58542017s)
	I1227 10:07:31.720088  425825 kic.go:203] duration metric: took 4.585558616s to extract preloaded images to volume ...
	W1227 10:07:31.720238  425825 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:07:31.720352  425825 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:07:31.774791  425825 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-651060 --name missing-upgrade-651060 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-651060 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-651060 --network missing-upgrade-651060 --ip 192.168.85.2 --volume missing-upgrade-651060:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1227 10:07:32.151009  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Running}}
	I1227 10:07:32.174177  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	I1227 10:07:32.211531  425825 cli_runner.go:164] Run: docker exec missing-upgrade-651060 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:07:32.279790  425825 oci.go:144] the created container "missing-upgrade-651060" has a running status.
	I1227 10:07:32.280299  425825 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/missing-upgrade-651060/id_rsa...
	I1227 10:07:33.333490  425825 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/missing-upgrade-651060/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:07:33.369638  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	I1227 10:07:33.424559  425825 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:07:33.424578  425825 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-651060 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:07:33.512221  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	I1227 10:07:33.534088  425825 machine.go:94] provisionDockerMachine start ...
	I1227 10:07:33.534174  425825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-651060
	I1227 10:07:33.563061  425825 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:33.563405  425825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1227 10:07:33.563414  425825 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:07:33.705251  425825 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-651060
	
	I1227 10:07:33.705280  425825 ubuntu.go:182] provisioning hostname "missing-upgrade-651060"
	I1227 10:07:33.705359  425825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-651060
	I1227 10:07:33.737912  425825 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:33.738331  425825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1227 10:07:33.738349  425825 main.go:144] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-651060 && echo "missing-upgrade-651060" | sudo tee /etc/hostname
	I1227 10:07:33.921286  425825 main.go:144] libmachine: SSH cmd err, output: <nil>: missing-upgrade-651060
	
	I1227 10:07:33.921369  425825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-651060
	I1227 10:07:33.943376  425825 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:33.943674  425825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1227 10:07:33.943692  425825 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-651060' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-651060/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-651060' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:07:34.088499  425825 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:07:34.088556  425825 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:07:34.088604  425825 ubuntu.go:190] setting up certificates
	I1227 10:07:34.088632  425825 provision.go:84] configureAuth start
	I1227 10:07:34.088717  425825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-651060
	I1227 10:07:34.110420  425825 provision.go:143] copyHostCerts
	I1227 10:07:34.110482  425825 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:07:34.110491  425825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:07:34.110581  425825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:07:34.110675  425825 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:07:34.110680  425825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:07:34.110705  425825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:07:34.110761  425825 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:07:34.110765  425825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:07:34.110787  425825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:07:34.110860  425825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-651060 san=[127.0.0.1 192.168.85.2 localhost minikube missing-upgrade-651060]
	I1227 10:07:34.360233  425825 provision.go:177] copyRemoteCerts
	I1227 10:07:34.360344  425825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:07:34.360414  425825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-651060
	I1227 10:07:34.383483  425825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/missing-upgrade-651060/id_rsa Username:docker}
	I1227 10:07:34.478285  425825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:07:34.509045  425825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 10:07:34.538719  425825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:07:34.570381  425825 provision.go:87] duration metric: took 481.708449ms to configureAuth
	I1227 10:07:34.570414  425825 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:07:34.570605  425825 config.go:182] Loaded profile config "missing-upgrade-651060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1227 10:07:34.570703  425825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-651060
	I1227 10:07:34.597753  425825 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:34.598071  425825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1227 10:07:34.598086  425825 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:07:34.941104  425825 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:07:34.941134  425825 machine.go:97] duration metric: took 1.407022318s to provisionDockerMachine
	I1227 10:07:34.941145  425825 client.go:176] duration metric: took 8.364462596s to LocalClient.Create
	I1227 10:07:34.941159  425825 start.go:167] duration metric: took 8.364524726s to libmachine.API.Create "missing-upgrade-651060"
	I1227 10:07:34.941166  425825 start.go:293] postStartSetup for "missing-upgrade-651060" (driver="docker")
	I1227 10:07:34.941176  425825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:07:34.941251  425825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:07:34.941294  425825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-651060
	I1227 10:07:34.973415  425825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/missing-upgrade-651060/id_rsa Username:docker}
	I1227 10:07:35.078448  425825 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:07:35.083925  425825 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:07:35.084017  425825 main.go:144] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1227 10:07:35.084030  425825 main.go:144] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1227 10:07:35.084037  425825 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1227 10:07:35.084048  425825 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:07:35.084101  425825 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:07:35.084182  425825 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:07:35.084295  425825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:07:35.097563  425825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:07:35.131294  425825 start.go:296] duration metric: took 190.113105ms for postStartSetup
	I1227 10:07:35.131642  425825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-651060
	I1227 10:07:35.209783  425825 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/missing-upgrade-651060/config.json ...
	I1227 10:07:35.210096  425825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:07:35.210146  425825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-651060
	I1227 10:07:35.262291  425825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/missing-upgrade-651060/id_rsa Username:docker}
	I1227 10:07:35.365088  425825 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:07:35.371671  425825 start.go:128] duration metric: took 8.798220435s to createHost
	I1227 10:07:35.371762  425825 cli_runner.go:164] Run: docker container inspect missing-upgrade-651060 --format={{.State.Status}}
	W1227 10:07:35.396645  425825 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:07:35.396671  425825 machine.go:94] provisionDockerMachine start ...
	I1227 10:07:35.396742  425825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-651060
	I1227 10:07:35.414570  425825 main.go:144] libmachine: Using SSH client type: native
	I1227 10:07:35.414895  425825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1227 10:07:35.414917  425825 main.go:144] libmachine: About to run SSH command:
	hostname
	
	
	==> CRI-O <==
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.865741298Z" level=info msg="Started container" PID=2250 containerID=2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f description=kube-system/kube-scheduler-pause-708160/kube-scheduler id=5a583942-548c-4b8b-b7a3-54743f4d4973 name=/runtime.v1.RuntimeService/StartContainer sandboxID=50c3315b3c6ed8363a3bd429dfc2c87a9626c1dd558b8b152357d91cbac700f5
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.907544255Z" level=info msg="Created container 10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9: kube-system/etcd-pause-708160/etcd" id=25707883-d0ba-4be1-80e6-32647c9a70a7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.912722141Z" level=info msg="Starting container: 10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9" id=86ca3d00-1a8c-41bf-b763-616728f7d15a name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.916631004Z" level=info msg="Started container" PID=2269 containerID=10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9 description=kube-system/etcd-pause-708160/etcd id=86ca3d00-1a8c-41bf-b763-616728f7d15a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d9c35b9f910442a460fcf9bc21007ca29dfe8f8e687792952fd2aa1e13b1415c
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.920268349Z" level=info msg="Created container d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc: kube-system/kube-apiserver-pause-708160/kube-apiserver" id=1039f901-0295-40ac-b614-7b46efbd7372 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.921207327Z" level=info msg="Starting container: d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc" id=a1988509-a6c0-46ba-84aa-657d4683efac name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.955355603Z" level=info msg="Started container" PID=2290 containerID=d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc description=kube-system/kube-apiserver-pause-708160/kube-apiserver id=a1988509-a6c0-46ba-84aa-657d4683efac name=/runtime.v1.RuntimeService/StartContainer sandboxID=854a133e8f3029a9e922318723ade9273d6b3951243f56c1d9aa10ec4770af1a
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.987288461Z" level=info msg="Created container a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c: kube-system/kube-controller-manager-pause-708160/kube-controller-manager" id=463d4502-4b4f-452e-b0dc-1870f06219fd name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.987931075Z" level=info msg="Starting container: a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c" id=eea01752-38f1-43ed-961c-c70fc502b963 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:07:11 pause-708160 crio[2096]: time="2025-12-27T10:07:11.990331963Z" level=info msg="Started container" PID=2278 containerID=a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c description=kube-system/kube-controller-manager-pause-708160/kube-controller-manager id=eea01752-38f1-43ed-961c-c70fc502b963 name=/runtime.v1.RuntimeService/StartContainer sandboxID=835d71bb40a29fa17d6528d2aeb01aa55148bafb27ec5b8dcd092b1359bfc4b8
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.091706011Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.095611978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.095848977Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.095886958Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.099223485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.099260572Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.099286271Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.102723254Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.102760013Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.102784423Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.106105975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.106145377Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.106169344Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.110189412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:07:22 pause-708160 crio[2096]: time="2025-12-27T10:07:22.110229101Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d2ec260d7f42c       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     25 seconds ago       Running             kube-apiserver            1                   854a133e8f302       kube-apiserver-pause-708160            kube-system
	10a5ff43b9ef4       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     25 seconds ago       Running             etcd                      1                   d9c35b9f91044       etcd-pause-708160                      kube-system
	a5fb483d50cc0       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     25 seconds ago       Running             kube-controller-manager   1                   835d71bb40a29       kube-controller-manager-pause-708160   kube-system
	2c64d8e381df6       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     25 seconds ago       Running             kube-scheduler            1                   50c3315b3c6ed       kube-scheduler-pause-708160            kube-system
	8accf0af299bb       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     25 seconds ago       Running             coredns                   1                   8d8e403ff3d6e       coredns-7d764666f9-4m4gm               kube-system
	83b21918cd3f4       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     25 seconds ago       Running             kube-proxy                1                   df253bbf36ccc       kube-proxy-2mnpk                       kube-system
	c1a1955310426       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     25 seconds ago       Running             kindnet-cni               1                   5f987979c72c2       kindnet-h9hk6                          kube-system
	3079e2ab8d34d       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     38 seconds ago       Exited              coredns                   0                   8d8e403ff3d6e       coredns-7d764666f9-4m4gm               kube-system
	c7965a3e48bb8       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   49 seconds ago       Exited              kindnet-cni               0                   5f987979c72c2       kindnet-h9hk6                          kube-system
	28bffddc84703       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     52 seconds ago       Exited              kube-proxy                0                   df253bbf36ccc       kube-proxy-2mnpk                       kube-system
	4a50eddc661a4       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            0                   50c3315b3c6ed       kube-scheduler-pause-708160            kube-system
	d0b0eb38c19ea       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     About a minute ago   Exited              kube-apiserver            0                   854a133e8f302       kube-apiserver-pause-708160            kube-system
	4ce5151188483       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   0                   835d71bb40a29       kube-controller-manager-pause-708160   kube-system
	c85dfce27860a       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     About a minute ago   Exited              etcd                      0                   d9c35b9f91044       etcd-pause-708160                      kube-system
	
	
	==> coredns [3079e2ab8d34d01f38d7ae4115c0bb716f4d774566cb6851ad4a865b5d8c3196] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60027 - 3172 "HINFO IN 6383244010125079538.4169238436832107798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019344903s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8accf0af299bbeb42eaba7c41dfc1e952332c74448d6dab63424c3da6b9345c1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43326 - 44979 "HINFO IN 589235669207316467.2357734862419678262. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02472733s
	
	
	==> describe nodes <==
	Name:               pause-708160
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-708160
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=pause-708160
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_06_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:06:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-708160
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:07:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:07:20 +0000   Sat, 27 Dec 2025 10:06:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:07:20 +0000   Sat, 27 Dec 2025 10:06:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:07:20 +0000   Sat, 27 Dec 2025 10:06:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:07:20 +0000   Sat, 27 Dec 2025 10:06:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-708160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                0aed8aa6-6d51-4c4b-af7d-5533ba1bacb6
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-4m4gm                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     53s
	  kube-system                 etcd-pause-708160                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-h9hk6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-apiserver-pause-708160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-pause-708160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-2mnpk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-pause-708160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  54s   node-controller  Node pause-708160 event: Registered Node pause-708160 in Controller
	  Normal  RegisteredNode  19s   node-controller  Node pause-708160 event: Registered Node pause-708160 in Controller
	
	
	==> dmesg <==
	[Dec27 09:37] overlayfs: idmapped layers are currently not supported
	[Dec27 09:38] overlayfs: idmapped layers are currently not supported
	[Dec27 09:39] overlayfs: idmapped layers are currently not supported
	[Dec27 09:41] overlayfs: idmapped layers are currently not supported
	[Dec27 09:42] overlayfs: idmapped layers are currently not supported
	[Dec27 09:43] overlayfs: idmapped layers are currently not supported
	[  +3.379616] overlayfs: idmapped layers are currently not supported
	[ +26.881821] overlayfs: idmapped layers are currently not supported
	[Dec27 09:44] overlayfs: idmapped layers are currently not supported
	[Dec27 09:45] overlayfs: idmapped layers are currently not supported
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	[Dec27 09:53] overlayfs: idmapped layers are currently not supported
	[Dec27 09:57] overlayfs: idmapped layers are currently not supported
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [10a5ff43b9ef4299beafa73add168d5fcb3e2e48949e93f7cf96fc22b1979ff9] <==
	{"level":"info","ts":"2025-12-27T10:07:12.103659Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:12.103679Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:12.105622Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T10:07:12.105735Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:07:12.105809Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:07:12.106660Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:07:12.140040Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:07:12.553826Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:12.553931Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:12.553998Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:07:12.554039Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:12.554088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:12.559373Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:12.559503Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:07:12.559548Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:12.559584Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:07:12.563321Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-708160 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:07:12.563499Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:12.563644Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:07:12.564540Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:12.588278Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:07:12.588357Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:07:12.596888Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:07:12.624426Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:07:12.625290Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [c85dfce27860abf2d02145e43c6d41b03c70e5c2b6e5bb8cf32868ab5d5a377f] <==
	{"level":"info","ts":"2025-12-27T10:06:32.561590Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T10:06:32.563616Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:06:32.563674Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:06:32.575461Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:06:32.591385Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:06:32.596288Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:06:32.600211Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:07:03.739603Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-27T10:07:03.739656Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-708160","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-27T10:07:03.739753Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T10:07:04.043645Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-27T10:07:04.043804Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T10:07:04.043885Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-27T10:07:04.044061Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-27T10:07:04.044116Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-27T10:07:04.044458Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-27T10:07:04.044530Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T10:07:04.044568Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-27T10:07:04.044405Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-27T10:07:04.044676Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-27T10:07:04.044718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T10:07:04.047354Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-27T10:07:04.047493Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-27T10:07:04.047557Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:07:04.047587Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-708160","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 10:07:37 up  1:50,  0 user,  load average: 2.87, 2.32, 2.11
	Linux pause-708160 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c1a19553104260faaaa5aa331a7ff93ae7f15092486abe8d8ca3f4b56ad77590] <==
	I1227 10:07:11.832490       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:07:11.836178       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:07:11.836347       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:07:11.836362       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:07:11.836378       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:07:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:07:12.107229       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:07:12.107317       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:07:12.107361       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1227 10:07:12.116149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:07:12.116704       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1227 10:07:12.117021       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:07:12.123497       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:07:12.140318       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:07:15.508649       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:07:15.508759       1 metrics.go:72] Registering metrics
	I1227 10:07:15.508838       1 controller.go:711] "Syncing nftables rules"
	I1227 10:07:22.091225       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:07:22.091403       1 main.go:301] handling current node
	I1227 10:07:32.092706       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:07:32.092791       1 main.go:301] handling current node
	
	
	==> kindnet [c7965a3e48bb8bf18d75a8539e56bfc922406ccd352f4b28d84d0a546f4e6c36] <==
	I1227 10:06:48.018797       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:06:48.019846       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:06:48.020084       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:06:48.020134       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:06:48.020178       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:06:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:06:48.223866       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:06:48.223991       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:06:48.224035       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:06:48.225902       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:06:48.525178       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:06:48.525303       1 metrics.go:72] Registering metrics
	I1227 10:06:48.525405       1 controller.go:711] "Syncing nftables rules"
	I1227 10:06:58.223415       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:06:58.223468       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0b0eb38c19eab92cacf23e4694451181e8c28243a242a10d326b5b858be4470] <==
	W1227 10:07:03.810734       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810763       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810808       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810837       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810863       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810889       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810919       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810948       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.810974       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811002       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811031       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811057       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811103       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811134       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811162       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811189       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811220       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811248       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811276       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.811304       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817369       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817471       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817560       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817660       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1227 10:07:03.817756       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d2ec260d7f42cdefbbd059feb22e198ec2ab6caefddd971c3819733ab82f35dc] <==
	I1227 10:07:15.437651       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:07:15.450098       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 10:07:15.450136       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 10:07:15.450144       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:07:15.450234       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:07:15.450325       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:15.450345       1 policy_source.go:248] refreshing policies
	I1227 10:07:15.450489       1 aggregator.go:187] initial CRD sync complete...
	I1227 10:07:15.450505       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 10:07:15.450510       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:07:15.450515       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:07:15.450554       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:07:15.451130       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:15.451337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:07:15.458912       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:07:15.469117       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:07:15.524262       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:07:15.530270       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1227 10:07:15.537288       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:07:16.117483       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:07:17.298114       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:07:18.637769       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:07:18.840428       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:07:18.887399       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:07:18.989937       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [4ce51511884834f1f4f55745c7878be41ec49c18945051c3c04b7b118b00b869] <==
	I1227 10:06:43.137998       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138005       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138012       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138018       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138026       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138040       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138124       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138248       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138700       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.138776       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.131615       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.158528       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:06:43.158613       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-708160"
	I1227 10:06:43.158677       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 10:06:43.154597       1 range_allocator.go:433] "Set node PodCIDR" node="pause-708160" podCIDRs=["10.244.0.0/24"]
	I1227 10:06:43.131621       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.131627       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.131633       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.172633       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.174761       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:43.329650       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.338788       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:43.338821       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:06:43.338827       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:07:03.160993       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [a5fb483d50cc072e19bf11d7e62a4a2eee6a288bc08d3ff65595eaf36ac0721c] <==
	I1227 10:07:18.526216       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.540391       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.540390       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.540412       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.540423       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.542689       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.543878       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.543912       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:07:18.543931       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:07:18.543935       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:18.543939       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545179       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545741       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545803       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545737       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545766       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545783       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545789       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.545775       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.548859       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.551038       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.551075       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:07:18.551082       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:07:18.599258       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:18.644940       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [28bffddc84703204ed115c0dffebcc0bca180c4c588838da7fc20688bc1238ff] <==
	I1227 10:06:44.746649       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:06:44.897402       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:06:44.998280       1 shared_informer.go:377] "Caches are synced"
	I1227 10:06:44.998338       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:06:44.998415       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:06:45.211406       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:06:45.211472       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:06:45.220571       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:06:45.224481       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:06:45.224613       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:06:45.230037       1 config.go:200] "Starting service config controller"
	I1227 10:06:45.230134       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:06:45.230185       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:06:45.230216       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:06:45.230266       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:06:45.230299       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:06:45.240990       1 config.go:309] "Starting node config controller"
	I1227 10:06:45.241105       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:06:45.241144       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:06:45.331287       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:06:45.331304       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:06:45.331322       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [83b21918cd3f4a0784408c901996870c341dd01f50c25e2bfd73792516ccd48b] <==
	I1227 10:07:12.648220       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:07:13.736646       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:15.541291       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:15.541432       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:07:15.551828       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:07:15.664273       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:07:15.664408       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:07:15.673075       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:07:15.673495       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:07:15.673508       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:07:15.679498       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:07:15.679525       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:07:15.679819       1 config.go:200] "Starting service config controller"
	I1227 10:07:15.679839       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:07:15.680202       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:07:15.680219       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:07:15.680609       1 config.go:309] "Starting node config controller"
	I1227 10:07:15.680629       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:07:15.680636       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:07:15.780019       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:07:15.780155       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 10:07:15.783670       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2c64d8e381df69d1958523f5822496beb2ba43eae987d8fae9b64ce57573225f] <==
	I1227 10:07:15.365122       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:07:15.365157       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:07:15.367285       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:07:15.367409       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:07:15.367428       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:07:15.367443       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 10:07:15.433152       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:07:15.433289       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:07:15.433359       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:07:15.433428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:07:15.433492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:07:15.433547       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:07:15.433616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:07:15.433688       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:07:15.433757       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:07:15.433823       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:07:15.433883       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:07:15.433934       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:07:15.434076       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:07:15.434175       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:07:15.434278       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:07:15.434506       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:07:15.434579       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:07:15.437218       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I1227 10:07:15.469451       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [4a50eddc661a4f82f4965c2ad250ef56be38dd8cb5ec0a70c61f8a632169fcb8] <==
	E1227 10:06:35.554191       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:06:36.379701       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:06:36.501521       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:06:36.501683       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:06:36.505311       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:06:36.583299       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:06:36.623597       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:06:36.657157       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:06:36.670550       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:06:36.692075       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:06:36.782285       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:06:36.784986       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:06:36.852471       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:06:36.907621       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:06:36.920927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:06:36.977300       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:06:36.988672       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:06:37.008068       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	I1227 10:06:39.003457       1 shared_informer.go:377] "Caches are synced"
	I1227 10:07:03.742895       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1227 10:07:03.753182       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1227 10:07:03.753223       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1227 10:07:03.753253       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:07:03.753466       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1227 10:07:03.753493       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.176518    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-4m4gm\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="70dcfb3e-0b6d-48dd-a817-849c6ffbda06" pod="kube-system/coredns-7d764666f9-4m4gm"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.259960    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-708160\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="8437bdea109a5d63c3096dca1ea29eca" pod="kube-system/kube-scheduler-pause-708160"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.342586    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-708160\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="3978c97bc810411d8a12a7fb5530b6a6" pod="kube-system/kube-controller-manager-pause-708160"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.412080    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-708160\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="93e8f5bd9542f5694bdd1ee4733121e1" pod="kube-system/kube-apiserver-pause-708160"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.432520    1300 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-708160\" is forbidden: User \"system:node:pause-708160\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-708160' and this object" podUID="35a0d8cfc40a7b855a6e10d8690f470a" pod="kube-system/etcd-pause-708160"
	Dec 27 10:07:15 pause-708160 kubelet[1300]: E1227 10:07:15.436725    1300 status_manager.go:1045] "Failed to get status for pod" err=<
	Dec 27 10:07:15 pause-708160 kubelet[1300]:         pods "kindnet-h9hk6" is forbidden: User "system:node:pause-708160" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-708160' and this object
	Dec 27 10:07:15 pause-708160 kubelet[1300]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	Dec 27 10:07:15 pause-708160 kubelet[1300]:  > podUID="21ce871d-d4c7-4ac3-8459-4154f198693b" pod="kube-system/kindnet-h9hk6"
	Dec 27 10:07:16 pause-708160 kubelet[1300]: E1227 10:07:16.421058    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-708160" containerName="kube-apiserver"
	Dec 27 10:07:17 pause-708160 kubelet[1300]: E1227 10:07:17.777661    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-708160" containerName="kube-scheduler"
	Dec 27 10:07:18 pause-708160 kubelet[1300]: E1227 10:07:18.677797    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-708160" containerName="kube-controller-manager"
	Dec 27 10:07:19 pause-708160 kubelet[1300]: W1227 10:07:19.655056    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 27 10:07:22 pause-708160 kubelet[1300]: E1227 10:07:22.764957    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-708160" containerName="etcd"
	Dec 27 10:07:22 pause-708160 kubelet[1300]: E1227 10:07:22.870565    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-708160" containerName="etcd"
	Dec 27 10:07:23 pause-708160 kubelet[1300]: E1227 10:07:23.851608    1300 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-4m4gm" containerName="coredns"
	Dec 27 10:07:26 pause-708160 kubelet[1300]: E1227 10:07:26.430294    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-708160" containerName="kube-apiserver"
	Dec 27 10:07:26 pause-708160 kubelet[1300]: E1227 10:07:26.886822    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-708160" containerName="kube-apiserver"
	Dec 27 10:07:27 pause-708160 kubelet[1300]: E1227 10:07:27.787054    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-708160" containerName="kube-scheduler"
	Dec 27 10:07:27 pause-708160 kubelet[1300]: E1227 10:07:27.889353    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-708160" containerName="kube-scheduler"
	Dec 27 10:07:28 pause-708160 kubelet[1300]: E1227 10:07:28.686893    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-708160" containerName="kube-controller-manager"
	Dec 27 10:07:29 pause-708160 kubelet[1300]: W1227 10:07:29.662542    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 27 10:07:30 pause-708160 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:07:30 pause-708160 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:07:30 pause-708160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
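The scheduler log above is dominated by "forbidden ... at the cluster scope" list failures that stop once "Caches are synced", and the kubelet reports "no relationship found between node 'pause-708160' and this object" for each static pod; both point at RBAC / node-authorizer state that had not caught up yet after the restart. A manual way to double-check the scheduler's permissions against the same context, not part of the test harness and assuming the default bootstrap binding name:

    # verify the scheduler's default ClusterRoleBinding and whether its user may list pods
    kubectl --context pause-708160 get clusterrolebinding system:kube-scheduler -o wide
    kubectl --context pause-708160 auth can-i list pods --as=system:kube-scheduler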
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-708160 -n pause-708160
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-708160 -n pause-708160: exit status 2 (514.52328ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-708160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (9.07s)
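The post-mortem above shows status --format={{.APIServer}} still reporting "Running" (exit status 2) after the pause attempt, and the field-selector query returning no non-Running pods. A quick manual inspection of the same profile, sketched here outside the harness with the profile name taken from this run:

    # check the component states minikube reports, then look at what is actually running inside the node
    out/minikube-linux-arm64 status -p pause-708160 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
    out/minikube-linux-arm64 ssh -p pause-708160 -- sudo crictl ps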

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.459707ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:24:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
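The exit status 11 above comes from minikube's paused check: before enabling the addon it runs "sudo runc list -f json" inside the node, and on this crio node that fails because /run/runc does not exist. A hand-run reproduction of the same check, using the profile name from this run (the /run/crun path below is only an assumption about how this crio build might be configured):

    # re-run the paused check by hand inside the node
    out/minikube-linux-arm64 ssh -p old-k8s-version-482317 -- sudo runc list -f json
    # see which low-level runtime state directory is actually present (crio may be using crun rather than runc)
    out/minikube-linux-arm64 ssh -p old-k8s-version-482317 -- ls /run/runc /run/crun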
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-482317 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-482317 describe deploy/metrics-server -n kube-system: exit status 1 (95.434003ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-482317 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
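The assertion above greps the deployment description for the overridden image "fake.domain/registry.k8s.io/echoserver:1.4"; since the enable itself failed, there is no deployment to inspect. For a run where the deployment does exist, a narrower check than the full describe output could be (illustrative only):

    # print just the image configured on the metrics-server deployment
    kubectl --context old-k8s-version-482317 -n kube-system get deploy metrics-server \
      -o=jsonpath='{.spec.template.spec.containers[0].image}'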
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-482317
helpers_test.go:244: (dbg) docker inspect old-k8s-version-482317:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb",
	        "Created": "2025-12-27T10:23:35.1004286Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 486024,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:23:35.183685419Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/hosts",
	        "LogPath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb-json.log",
	        "Name": "/old-k8s-version-482317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-482317:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-482317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb",
	                "LowerDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-482317",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-482317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-482317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-482317",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-482317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1da8c4f1495e8867d4030eaedf965781cb658542924ca99b3f75a29cb3ca1091",
	            "SandboxKey": "/var/run/docker/netns/1da8c4f1495e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-482317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:de:14:f1:87:6b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76ec4721253c21d528fd72ab4bdb6e7b5be9293e371f48ba721c982435ec2193",
	                    "EndpointID": "2faa4122386ed97107f75f28ce658cc6cc5f9c5328fd56728f57de6411648eca",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-482317",
	                        "d3ed077d2566"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
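Most of the inspect dump above matters only for the container state, the network attachment, and the published ports. The same information can be pulled directly, sketched here against the container name from this run:

    # pull only the fields the post-mortem usually needs
    docker inspect old-k8s-version-482317 --format '{{.State.Status}} restarts={{.RestartCount}}'
    docker port old-k8s-version-482317 8443/tcp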
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482317 -n old-k8s-version-482317
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-482317 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-482317 logs -n 25: (1.245369324s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-785247 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo containerd config dump                                                                                                                                                                                                  │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo crio config                                                                                                                                                                                                             │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ delete  │ -p cilium-785247                                                                                                                                                                                                                              │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:17 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ delete  │ -p cert-expiration-528820                                                                                                                                                                                                                     │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ start   │ -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-915850 │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │                     │
	│ delete  │ -p force-systemd-env-193016                                                                                                                                                                                                                   │ force-systemd-env-193016  │ jenkins │ v1.37.0 │ 27 Dec 25 10:22 UTC │ 27 Dec 25 10:22 UTC │
	│ start   │ -p cert-options-810217 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ cert-options-810217 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ -p cert-options-810217 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ delete  │ -p cert-options-810217                                                                                                                                                                                                                        │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:23:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:23:29.475404  485582 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:23:29.475578  485582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:23:29.475611  485582 out.go:374] Setting ErrFile to fd 2...
	I1227 10:23:29.475633  485582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:23:29.476040  485582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:23:29.476576  485582 out.go:368] Setting JSON to false
	I1227 10:23:29.477479  485582 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7563,"bootTime":1766823447,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:23:29.477601  485582 start.go:143] virtualization:  
	I1227 10:23:29.481503  485582 out.go:179] * [old-k8s-version-482317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:23:29.484408  485582 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:23:29.484496  485582 notify.go:221] Checking for updates...
	I1227 10:23:29.491347  485582 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:23:29.494758  485582 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:23:29.497919  485582 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:23:29.501140  485582 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:23:29.504231  485582 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:23:29.508036  485582 config.go:182] Loaded profile config "force-systemd-flag-915850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:23:29.508148  485582 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:23:29.534927  485582 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:23:29.535043  485582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:23:29.599485  485582 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:23:29.590342941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:23:29.599591  485582 docker.go:319] overlay module found
	I1227 10:23:29.602757  485582 out.go:179] * Using the docker driver based on user configuration
	I1227 10:23:29.605875  485582 start.go:309] selected driver: docker
	I1227 10:23:29.605896  485582 start.go:928] validating driver "docker" against <nil>
	I1227 10:23:29.605911  485582 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:23:29.606635  485582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:23:29.663564  485582 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:23:29.654017255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:23:29.663733  485582 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:23:29.664020  485582 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:23:29.667097  485582 out.go:179] * Using Docker driver with root privileges
	I1227 10:23:29.670059  485582 cni.go:84] Creating CNI manager for ""
	I1227 10:23:29.670144  485582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:23:29.670158  485582 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:23:29.670253  485582 start.go:353] cluster config:
	{Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:23:29.675367  485582 out.go:179] * Starting "old-k8s-version-482317" primary control-plane node in "old-k8s-version-482317" cluster
	I1227 10:23:29.678220  485582 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:23:29.681237  485582 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:23:29.684136  485582 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:23:29.684188  485582 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:23:29.684200  485582 cache.go:65] Caching tarball of preloaded images
	I1227 10:23:29.684239  485582 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:23:29.684291  485582 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:23:29.684301  485582 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 10:23:29.684420  485582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/config.json ...
	I1227 10:23:29.684437  485582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/config.json: {Name:mkd26a59bc9a8b8816ee8e5c2b25f4c85d040c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:23:29.703393  485582 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:23:29.703423  485582 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:23:29.703444  485582 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:23:29.703475  485582 start.go:360] acquireMachinesLock for old-k8s-version-482317: {Name:mk4c0cd3041b29cfcb95b36c1e5eae64b45ad166 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:23:29.703582  485582 start.go:364] duration metric: took 85.129µs to acquireMachinesLock for "old-k8s-version-482317"
	I1227 10:23:29.703612  485582 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:23:29.703685  485582 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:23:29.709148  485582 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:23:29.709409  485582 start.go:159] libmachine.API.Create for "old-k8s-version-482317" (driver="docker")
	I1227 10:23:29.709441  485582 client.go:173] LocalClient.Create starting
	I1227 10:23:29.709515  485582 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:23:29.709550  485582 main.go:144] libmachine: Decoding PEM data...
	I1227 10:23:29.709573  485582 main.go:144] libmachine: Parsing certificate...
	I1227 10:23:29.709624  485582 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:23:29.709649  485582 main.go:144] libmachine: Decoding PEM data...
	I1227 10:23:29.709660  485582 main.go:144] libmachine: Parsing certificate...
	I1227 10:23:29.710016  485582 cli_runner.go:164] Run: docker network inspect old-k8s-version-482317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:23:29.727324  485582 cli_runner.go:211] docker network inspect old-k8s-version-482317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:23:29.727478  485582 network_create.go:284] running [docker network inspect old-k8s-version-482317] to gather additional debugging logs...
	I1227 10:23:29.727519  485582 cli_runner.go:164] Run: docker network inspect old-k8s-version-482317
	W1227 10:23:29.742546  485582 cli_runner.go:211] docker network inspect old-k8s-version-482317 returned with exit code 1
	I1227 10:23:29.742573  485582 network_create.go:287] error running [docker network inspect old-k8s-version-482317]: docker network inspect old-k8s-version-482317: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-482317 not found
	I1227 10:23:29.742587  485582 network_create.go:289] output of [docker network inspect old-k8s-version-482317]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-482317 not found
	
	** /stderr **
	I1227 10:23:29.742689  485582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:23:29.758042  485582 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:23:29.758473  485582 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:23:29.758778  485582 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:23:29.759248  485582 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d1430}
	I1227 10:23:29.759274  485582 network_create.go:124] attempt to create docker network old-k8s-version-482317 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:23:29.759332  485582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-482317 old-k8s-version-482317
	I1227 10:23:29.818304  485582 network_create.go:108] docker network old-k8s-version-482317 192.168.76.0/24 created
	I1227 10:23:29.818339  485582 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-482317" container
	I1227 10:23:29.818429  485582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:23:29.834777  485582 cli_runner.go:164] Run: docker volume create old-k8s-version-482317 --label name.minikube.sigs.k8s.io=old-k8s-version-482317 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:23:29.852797  485582 oci.go:103] Successfully created a docker volume old-k8s-version-482317
	I1227 10:23:29.852884  485582 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-482317-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-482317 --entrypoint /usr/bin/test -v old-k8s-version-482317:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:23:30.434646  485582 oci.go:107] Successfully prepared a docker volume old-k8s-version-482317
	I1227 10:23:30.434717  485582 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:23:30.434731  485582 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:23:30.434817  485582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-482317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:23:35.030678  485582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-482317:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.595804139s)
	I1227 10:23:35.030722  485582 kic.go:203] duration metric: took 4.595986147s to extract preloaded images to volume ...
	W1227 10:23:35.030883  485582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:23:35.030995  485582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:23:35.084896  485582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-482317 --name old-k8s-version-482317 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-482317 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-482317 --network old-k8s-version-482317 --ip 192.168.76.2 --volume old-k8s-version-482317:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:23:35.416068  485582 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Running}}
	I1227 10:23:35.438153  485582 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:23:35.468211  485582 cli_runner.go:164] Run: docker exec old-k8s-version-482317 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:23:35.524437  485582 oci.go:144] the created container "old-k8s-version-482317" has a running status.
	I1227 10:23:35.524465  485582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa...
	I1227 10:23:36.101596  485582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:23:36.123333  485582 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:23:36.145886  485582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:23:36.145906  485582 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-482317 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:23:36.187831  485582 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:23:36.205431  485582 machine.go:94] provisionDockerMachine start ...
	I1227 10:23:36.205553  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:23:36.222580  485582 main.go:144] libmachine: Using SSH client type: native
	I1227 10:23:36.222942  485582 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1227 10:23:36.222957  485582 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:23:36.223592  485582 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35292->127.0.0.1:33408: read: connection reset by peer
	I1227 10:23:39.359588  485582 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-482317
	
	I1227 10:23:39.359612  485582 ubuntu.go:182] provisioning hostname "old-k8s-version-482317"
	I1227 10:23:39.359677  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:23:39.377524  485582 main.go:144] libmachine: Using SSH client type: native
	I1227 10:23:39.377849  485582 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1227 10:23:39.377866  485582 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-482317 && echo "old-k8s-version-482317" | sudo tee /etc/hostname
	I1227 10:23:39.528941  485582 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-482317
	
	I1227 10:23:39.529068  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:23:39.551287  485582 main.go:144] libmachine: Using SSH client type: native
	I1227 10:23:39.551613  485582 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1227 10:23:39.551636  485582 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-482317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-482317/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-482317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:23:39.688357  485582 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:23:39.688402  485582 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:23:39.688422  485582 ubuntu.go:190] setting up certificates
	I1227 10:23:39.688431  485582 provision.go:84] configureAuth start
	I1227 10:23:39.688491  485582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-482317
	I1227 10:23:39.705546  485582 provision.go:143] copyHostCerts
	I1227 10:23:39.705630  485582 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:23:39.705644  485582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:23:39.705724  485582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:23:39.705828  485582 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:23:39.705842  485582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:23:39.705872  485582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:23:39.705940  485582 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:23:39.705951  485582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:23:39.705982  485582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:23:39.706050  485582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-482317 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-482317]
	I1227 10:23:39.979031  485582 provision.go:177] copyRemoteCerts
	I1227 10:23:39.979102  485582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:23:39.979153  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:23:39.996254  485582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:23:40.132555  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:23:40.152093  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 10:23:40.170833  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:23:40.190548  485582 provision.go:87] duration metric: took 502.10329ms to configureAuth
	I1227 10:23:40.190579  485582 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:23:40.190780  485582 config.go:182] Loaded profile config "old-k8s-version-482317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:23:40.190894  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:23:40.210053  485582 main.go:144] libmachine: Using SSH client type: native
	I1227 10:23:40.210356  485582 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1227 10:23:40.210690  485582 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:23:40.511483  485582 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:23:40.511534  485582 machine.go:97] duration metric: took 4.306076322s to provisionDockerMachine
	I1227 10:23:40.511547  485582 client.go:176] duration metric: took 10.802099864s to LocalClient.Create
	I1227 10:23:40.511578  485582 start.go:167] duration metric: took 10.802156013s to libmachine.API.Create "old-k8s-version-482317"
	I1227 10:23:40.511597  485582 start.go:293] postStartSetup for "old-k8s-version-482317" (driver="docker")
	I1227 10:23:40.511607  485582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:23:40.511673  485582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:23:40.511725  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:23:40.529292  485582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:23:40.629139  485582 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:23:40.632577  485582 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:23:40.632608  485582 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:23:40.632620  485582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:23:40.632675  485582 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:23:40.632765  485582 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:23:40.632876  485582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:23:40.640406  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:23:40.658350  485582 start.go:296] duration metric: took 146.737665ms for postStartSetup
	I1227 10:23:40.658736  485582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-482317
	I1227 10:23:40.676032  485582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/config.json ...
	I1227 10:23:40.676329  485582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:23:40.676396  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:23:40.694652  485582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:23:40.789439  485582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:23:40.794493  485582 start.go:128] duration metric: took 11.090793763s to createHost
	I1227 10:23:40.794520  485582 start.go:83] releasing machines lock for "old-k8s-version-482317", held for 11.090924899s
	I1227 10:23:40.794596  485582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-482317
	I1227 10:23:40.811917  485582 ssh_runner.go:195] Run: cat /version.json
	I1227 10:23:40.812116  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:23:40.812391  485582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:23:40.812452  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:23:40.833270  485582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:23:40.844646  485582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:23:41.026033  485582 ssh_runner.go:195] Run: systemctl --version
	I1227 10:23:41.032923  485582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:23:41.071062  485582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:23:41.076843  485582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:23:41.076917  485582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:23:41.107482  485582 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:23:41.107521  485582 start.go:496] detecting cgroup driver to use...
	I1227 10:23:41.107555  485582 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:23:41.107623  485582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:23:41.125798  485582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:23:41.138930  485582 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:23:41.138994  485582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:23:41.157329  485582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:23:41.177096  485582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:23:41.297589  485582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:23:41.427557  485582 docker.go:234] disabling docker service ...
	I1227 10:23:41.427636  485582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:23:41.450115  485582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:23:41.463363  485582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:23:41.578850  485582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:23:41.701988  485582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:23:41.718187  485582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:23:41.733983  485582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1227 10:23:41.734059  485582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:23:41.743804  485582 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:23:41.743897  485582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:23:41.753704  485582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:23:41.763120  485582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:23:41.772701  485582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:23:41.780975  485582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:23:41.789777  485582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:23:41.804346  485582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:23:41.813623  485582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:23:41.821533  485582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:23:41.830071  485582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:23:41.954444  485582 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:23:42.143154  485582 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:23:42.143281  485582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:23:42.148511  485582 start.go:574] Will wait 60s for crictl version
	I1227 10:23:42.148665  485582 ssh_runner.go:195] Run: which crictl
	I1227 10:23:42.153926  485582 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:23:42.183470  485582 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:23:42.183638  485582 ssh_runner.go:195] Run: crio --version
	I1227 10:23:42.214990  485582 ssh_runner.go:195] Run: crio --version
	I1227 10:23:42.252653  485582 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1227 10:23:42.255598  485582 cli_runner.go:164] Run: docker network inspect old-k8s-version-482317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:23:42.280060  485582 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:23:42.285681  485582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:23:42.296538  485582 kubeadm.go:884] updating cluster {Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:23:42.296667  485582 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:23:42.296733  485582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:23:42.329995  485582 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:23:42.330023  485582 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:23:42.330080  485582 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:23:42.357502  485582 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:23:42.357526  485582 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:23:42.357535  485582 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1227 10:23:42.357625  485582 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-482317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:23:42.357705  485582 ssh_runner.go:195] Run: crio config
	I1227 10:23:42.411643  485582 cni.go:84] Creating CNI manager for ""
	I1227 10:23:42.411712  485582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:23:42.411752  485582 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:23:42.411812  485582 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-482317 NodeName:old-k8s-version-482317 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:23:42.412019  485582 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-482317"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:23:42.412115  485582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 10:23:42.420089  485582 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:23:42.420170  485582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:23:42.429115  485582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 10:23:42.442339  485582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:23:42.456154  485582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1227 10:23:42.471733  485582 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:23:42.476008  485582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:23:42.486582  485582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:23:42.606310  485582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:23:42.622806  485582 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317 for IP: 192.168.76.2
	I1227 10:23:42.622885  485582 certs.go:195] generating shared ca certs ...
	I1227 10:23:42.622917  485582 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:23:42.623121  485582 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:23:42.623215  485582 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:23:42.623246  485582 certs.go:257] generating profile certs ...
	I1227 10:23:42.623334  485582 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.key
	I1227 10:23:42.623383  485582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt with IP's: []
	I1227 10:23:42.748525  485582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt ...
	I1227 10:23:42.748559  485582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: {Name:mk4345ee39c9eb7d6626ec158fd18d7538e9901d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:23:42.748799  485582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.key ...
	I1227 10:23:42.748815  485582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.key: {Name:mk774ef1fdac4c601b249224a0ed90b0cd1bc40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:23:42.748931  485582 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key.76d9b417
	I1227 10:23:42.748950  485582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.crt.76d9b417 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:23:42.918236  485582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.crt.76d9b417 ...
	I1227 10:23:42.918270  485582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.crt.76d9b417: {Name:mka751658fdfc6ffc587e4b405feac2561f6abd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:23:42.918458  485582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key.76d9b417 ...
	I1227 10:23:42.918473  485582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key.76d9b417: {Name:mk0c457f924a64e25c10e5e53ef1d0157c56cacb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:23:42.918559  485582 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.crt.76d9b417 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.crt
	I1227 10:23:42.918635  485582 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key.76d9b417 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key
	I1227 10:23:42.918694  485582 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.key
	I1227 10:23:42.918760  485582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.crt with IP's: []
	I1227 10:23:43.258850  485582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.crt ...
	I1227 10:23:43.258888  485582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.crt: {Name:mkb10127f53cf605faa0f518bf3ca5d8ea87ddc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:23:43.259064  485582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.key ...
	I1227 10:23:43.259081  485582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.key: {Name:mk3c0dc3aadf6a9da39ecb0aed9c4967f853751e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:23:43.259266  485582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:23:43.259315  485582 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:23:43.259329  485582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:23:43.259358  485582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:23:43.259389  485582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:23:43.259419  485582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:23:43.259467  485582 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:23:43.260127  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:23:43.290332  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:23:43.311196  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:23:43.330719  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:23:43.349255  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 10:23:43.368626  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:23:43.386389  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:23:43.403798  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:23:43.421874  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:23:43.440889  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:23:43.459461  485582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:23:43.478404  485582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:23:43.492204  485582 ssh_runner.go:195] Run: openssl version
	I1227 10:23:43.499532  485582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:23:43.507383  485582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:23:43.515373  485582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:23:43.520088  485582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:23:43.520164  485582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:23:43.562131  485582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:23:43.569949  485582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2998112.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:23:43.578170  485582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:23:43.585996  485582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:23:43.593806  485582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:23:43.597596  485582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:23:43.597667  485582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:23:43.640112  485582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:23:43.647871  485582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:23:43.655523  485582 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:23:43.663107  485582 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:23:43.671069  485582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:23:43.675125  485582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:23:43.675222  485582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:23:43.717090  485582 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:23:43.724851  485582 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/299811.pem /etc/ssl/certs/51391683.0
	I1227 10:23:43.732546  485582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:23:43.736657  485582 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:23:43.736714  485582 kubeadm.go:401] StartCluster: {Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:23:43.736790  485582 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:23:43.736850  485582 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:23:43.764511  485582 cri.go:96] found id: ""
	I1227 10:23:43.764585  485582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:23:43.772737  485582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:23:43.780706  485582 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:23:43.780796  485582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:23:43.788456  485582 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:23:43.788479  485582 kubeadm.go:158] found existing configuration files:
	
	I1227 10:23:43.788540  485582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:23:43.796655  485582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:23:43.796729  485582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:23:43.804536  485582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:23:43.812786  485582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:23:43.812856  485582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:23:43.821054  485582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:23:43.830168  485582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:23:43.830287  485582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:23:43.838171  485582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:23:43.846013  485582 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:23:43.846083  485582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:23:43.853935  485582 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:23:43.902701  485582 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1227 10:23:43.902777  485582 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:23:43.941437  485582 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:23:43.941516  485582 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:23:43.941557  485582 kubeadm.go:319] OS: Linux
	I1227 10:23:43.941612  485582 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:23:43.941667  485582 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:23:43.941717  485582 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:23:43.941769  485582 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:23:43.941820  485582 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:23:43.941881  485582 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:23:43.941930  485582 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:23:43.941982  485582 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:23:43.942034  485582 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:23:44.050246  485582 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:23:44.050367  485582 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:23:44.050474  485582 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:23:44.208329  485582 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:23:44.215775  485582 out.go:252]   - Generating certificates and keys ...
	I1227 10:23:44.215880  485582 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:23:44.215954  485582 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:23:45.262560  485582 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:23:45.767312  485582 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:23:46.104550  485582 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:23:46.672660  485582 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:23:47.178550  485582 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:23:47.178978  485582 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-482317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:23:47.479154  485582 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:23:47.479859  485582 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-482317] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:23:47.813237  485582 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:23:48.011828  485582 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:23:48.332773  485582 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:23:48.333042  485582 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:23:48.820023  485582 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:23:49.281954  485582 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:23:49.573913  485582 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:23:49.842877  485582 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:23:49.843699  485582 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:23:49.846665  485582 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:23:49.850072  485582 out.go:252]   - Booting up control plane ...
	I1227 10:23:49.850183  485582 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:23:49.850262  485582 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:23:49.851150  485582 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:23:49.867416  485582 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:23:49.868296  485582 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:23:49.868362  485582 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:23:50.020472  485582 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1227 10:23:57.018821  485582 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.006788 seconds
	I1227 10:23:57.018948  485582 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:23:57.036784  485582 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:23:57.570286  485582 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:23:57.570506  485582 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-482317 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:23:58.084238  485582 kubeadm.go:319] [bootstrap-token] Using token: av83q6.1b7l4dy3bbtz65nm
	I1227 10:23:58.087163  485582 out.go:252]   - Configuring RBAC rules ...
	I1227 10:23:58.087297  485582 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:23:58.094621  485582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:23:58.103926  485582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:23:58.108765  485582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:23:58.113121  485582 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:23:58.119737  485582 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:23:58.133791  485582 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:23:58.430055  485582 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:23:58.505773  485582 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:23:58.507457  485582 kubeadm.go:319] 
	I1227 10:23:58.507535  485582 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:23:58.507548  485582 kubeadm.go:319] 
	I1227 10:23:58.507627  485582 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:23:58.507636  485582 kubeadm.go:319] 
	I1227 10:23:58.507661  485582 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:23:58.508231  485582 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:23:58.508293  485582 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:23:58.508304  485582 kubeadm.go:319] 
	I1227 10:23:58.508360  485582 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:23:58.508366  485582 kubeadm.go:319] 
	I1227 10:23:58.508414  485582 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:23:58.508423  485582 kubeadm.go:319] 
	I1227 10:23:58.508475  485582 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:23:58.508553  485582 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:23:58.508625  485582 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:23:58.508634  485582 kubeadm.go:319] 
	I1227 10:23:58.508914  485582 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:23:58.508999  485582 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:23:58.509008  485582 kubeadm.go:319] 
	I1227 10:23:58.509288  485582 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token av83q6.1b7l4dy3bbtz65nm \
	I1227 10:23:58.509400  485582 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 \
	I1227 10:23:58.509582  485582 kubeadm.go:319] 	--control-plane 
	I1227 10:23:58.509601  485582 kubeadm.go:319] 
	I1227 10:23:58.509884  485582 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:23:58.509897  485582 kubeadm.go:319] 
	I1227 10:23:58.510162  485582 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token av83q6.1b7l4dy3bbtz65nm \
	I1227 10:23:58.510491  485582 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 
	I1227 10:23:58.524265  485582 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:23:58.524386  485582 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:23:58.524402  485582 cni.go:84] Creating CNI manager for ""
	I1227 10:23:58.524409  485582 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:23:58.527690  485582 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:23:58.530782  485582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:23:58.536289  485582 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1227 10:23:58.536309  485582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:23:58.564084  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
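	Here the kindnet CNI manifest is applied with the cluster's bundled kubectl, after the earlier stat check confirmed the portmap plugin binary exists. A quick manual spot-check of the same pieces from the host (paths are the minikube defaults seen in this log) could be:
	
	  minikube -p old-k8s-version-482317 ssh -- sudo ls /opt/cni/bin /etc/cni/net.d
	  minikube -p old-k8s-version-482317 ssh -- sudo /var/lib/minikube/binaries/v1.28.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonsets
	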
	I1227 10:23:59.531480  485582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:23:59.531621  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:23:59.531703  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-482317 minikube.k8s.io/updated_at=2025_12_27T10_23_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=old-k8s-version-482317 minikube.k8s.io/primary=true
	I1227 10:23:59.701080  485582 ops.go:34] apiserver oom_adj: -16
	I1227 10:23:59.701187  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:00.201499  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:00.702230  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:01.202078  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:01.702074  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:02.201369  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:02.702165  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:03.201325  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:03.701315  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:04.202156  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:04.702261  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:05.201370  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:05.702267  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:06.201974  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:06.701475  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:07.201254  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:07.701443  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:08.202264  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:08.701795  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:09.201962  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:09.702171  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:10.201278  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:10.701508  485582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:24:10.868723  485582 kubeadm.go:1114] duration metric: took 11.337148448s to wait for elevateKubeSystemPrivileges
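	The repeated `kubectl get sa default` calls above are minikube polling (roughly every 500ms, per the timestamps) until the default service account exists, which is what the elevateKubeSystemPrivileges step waits on. A minimal sketch of the same wait loop, assuming the binary and kubeconfig paths from this log, is:
	
	  until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # matches the ~500ms retry interval in the log
	  done
	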
	I1227 10:24:10.868754  485582 kubeadm.go:403] duration metric: took 27.132044468s to StartCluster
	I1227 10:24:10.868772  485582 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:10.868835  485582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:24:10.869478  485582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:10.869685  485582 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:24:10.869805  485582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:24:10.870050  485582 config.go:182] Loaded profile config "old-k8s-version-482317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:24:10.870091  485582 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:24:10.870163  485582 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-482317"
	I1227 10:24:10.870179  485582 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-482317"
	I1227 10:24:10.870203  485582 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:24:10.870710  485582 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:10.871276  485582 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-482317"
	I1227 10:24:10.871304  485582 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-482317"
	I1227 10:24:10.871570  485582 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:10.876047  485582 out.go:179] * Verifying Kubernetes components...
	I1227 10:24:10.880505  485582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:24:10.903827  485582 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-482317"
	I1227 10:24:10.903864  485582 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:24:10.911302  485582 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:10.914257  485582 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:24:10.917261  485582 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:24:10.917284  485582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:24:10.917346  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:10.942571  485582 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:24:10.942597  485582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:24:10.942669  485582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:10.969031  485582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:10.986100  485582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:11.293669  485582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
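	The sed pipeline above rewrites the CoreDNS ConfigMap in place so that host.minikube.internal resolves to the host gateway: it inserts a log directive before the existing errors line and a hosts block before the forward directive. The relevant part of the Corefile ends up roughly as (other plugins elided):
	
	          log
	          errors
	          ...
	          hosts {
	             192.168.76.1 host.minikube.internal
	             fallthrough
	          }
	          forward . /etc/resolv.conf
	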
	I1227 10:24:11.293895  485582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:24:11.329564  485582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:24:11.398351  485582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:24:12.284912  485582 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-482317" to be "Ready" ...
	I1227 10:24:12.285343  485582 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 10:24:12.707684  485582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.378026115s)
	I1227 10:24:12.707742  485582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.309365985s)
	I1227 10:24:12.723755  485582 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:24:12.727444  485582 addons.go:530] duration metric: took 1.85734327s for enable addons: enabled=[storage-provisioner default-storageclass]
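	Only storage-provisioner and default-storageclass are enabled here, matching the toEnable map logged earlier. The resulting addon state can be inspected at any time with:
	
	  minikube -p old-k8s-version-482317 addons list
	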
	I1227 10:24:12.792811  485582 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-482317" context rescaled to 1 replicas
	W1227 10:24:14.288632  485582 node_ready.go:57] node "old-k8s-version-482317" has "Ready":"False" status (will retry)
	W1227 10:24:16.288817  485582 node_ready.go:57] node "old-k8s-version-482317" has "Ready":"False" status (will retry)
	W1227 10:24:18.289399  485582 node_ready.go:57] node "old-k8s-version-482317" has "Ready":"False" status (will retry)
	W1227 10:24:20.787783  485582 node_ready.go:57] node "old-k8s-version-482317" has "Ready":"False" status (will retry)
	W1227 10:24:22.789242  485582 node_ready.go:57] node "old-k8s-version-482317" has "Ready":"False" status (will retry)
	I1227 10:24:24.788688  485582 node_ready.go:49] node "old-k8s-version-482317" is "Ready"
	I1227 10:24:24.788729  485582 node_ready.go:38] duration metric: took 12.503774503s for node "old-k8s-version-482317" to be "Ready" ...
	I1227 10:24:24.788743  485582 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:24:24.788817  485582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:24:24.801272  485582 api_server.go:72] duration metric: took 13.931549659s to wait for apiserver process to appear ...
	I1227 10:24:24.801298  485582 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:24:24.801319  485582 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:24:24.812415  485582 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:24:24.813934  485582 api_server.go:141] control plane version: v1.28.0
	I1227 10:24:24.813969  485582 api_server.go:131] duration metric: took 12.663355ms to wait for apiserver health ...
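	The health probe above hits the apiserver's /healthz endpoint directly and expects a plain "ok". The same check can be run by hand against this cluster, for example:
	
	  kubectl --context old-k8s-version-482317 get --raw /healthz
	  # or from the node itself; unauthenticated access to /healthz is typically allowed by default
	  curl -sk https://192.168.76.2:8443/healthz
	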
	I1227 10:24:24.813979  485582 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:24:24.821820  485582 system_pods.go:59] 8 kube-system pods found
	I1227 10:24:24.821855  485582 system_pods.go:61] "coredns-5dd5756b68-xtcrs" [a1ff47cc-238c-4217-8591-ff8b26b907da] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:24:24.821865  485582 system_pods.go:61] "etcd-old-k8s-version-482317" [70dce620-1f12-49f9-8f70-ab1eb4c021eb] Running
	I1227 10:24:24.821917  485582 system_pods.go:61] "kindnet-4jvpn" [35d8c991-0977-4f5f-95d3-d06fdf9b1481] Running
	I1227 10:24:24.821923  485582 system_pods.go:61] "kube-apiserver-old-k8s-version-482317" [970f565c-b1c3-40cd-8165-f425b311a9e7] Running
	I1227 10:24:24.821928  485582 system_pods.go:61] "kube-controller-manager-old-k8s-version-482317" [41aa78cd-9c7b-49f7-bcc1-e85c6d9d606e] Running
	I1227 10:24:24.821933  485582 system_pods.go:61] "kube-proxy-gr6gq" [3a6b528b-199e-43a6-8a9b-f9157d3800a0] Running
	I1227 10:24:24.821940  485582 system_pods.go:61] "kube-scheduler-old-k8s-version-482317" [42afac7c-9449-4b76-b9d1-ef7655e77163] Running
	I1227 10:24:24.821946  485582 system_pods.go:61] "storage-provisioner" [0bd371c6-e3b4-4c0b-8a3a-f17eade42f06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:24:24.821971  485582 system_pods.go:74] duration metric: took 7.985647ms to wait for pod list to return data ...
	I1227 10:24:24.821988  485582 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:24:24.824631  485582 default_sa.go:45] found service account: "default"
	I1227 10:24:24.824660  485582 default_sa.go:55] duration metric: took 2.665389ms for default service account to be created ...
	I1227 10:24:24.824671  485582 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:24:24.828777  485582 system_pods.go:86] 8 kube-system pods found
	I1227 10:24:24.828859  485582 system_pods.go:89] "coredns-5dd5756b68-xtcrs" [a1ff47cc-238c-4217-8591-ff8b26b907da] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:24:24.828883  485582 system_pods.go:89] "etcd-old-k8s-version-482317" [70dce620-1f12-49f9-8f70-ab1eb4c021eb] Running
	I1227 10:24:24.828925  485582 system_pods.go:89] "kindnet-4jvpn" [35d8c991-0977-4f5f-95d3-d06fdf9b1481] Running
	I1227 10:24:24.828950  485582 system_pods.go:89] "kube-apiserver-old-k8s-version-482317" [970f565c-b1c3-40cd-8165-f425b311a9e7] Running
	I1227 10:24:24.828972  485582 system_pods.go:89] "kube-controller-manager-old-k8s-version-482317" [41aa78cd-9c7b-49f7-bcc1-e85c6d9d606e] Running
	I1227 10:24:24.829009  485582 system_pods.go:89] "kube-proxy-gr6gq" [3a6b528b-199e-43a6-8a9b-f9157d3800a0] Running
	I1227 10:24:24.829033  485582 system_pods.go:89] "kube-scheduler-old-k8s-version-482317" [42afac7c-9449-4b76-b9d1-ef7655e77163] Running
	I1227 10:24:24.829055  485582 system_pods.go:89] "storage-provisioner" [0bd371c6-e3b4-4c0b-8a3a-f17eade42f06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:24:24.829115  485582 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 10:24:25.084192  485582 system_pods.go:86] 8 kube-system pods found
	I1227 10:24:25.084279  485582 system_pods.go:89] "coredns-5dd5756b68-xtcrs" [a1ff47cc-238c-4217-8591-ff8b26b907da] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:24:25.084310  485582 system_pods.go:89] "etcd-old-k8s-version-482317" [70dce620-1f12-49f9-8f70-ab1eb4c021eb] Running
	I1227 10:24:25.084354  485582 system_pods.go:89] "kindnet-4jvpn" [35d8c991-0977-4f5f-95d3-d06fdf9b1481] Running
	I1227 10:24:25.084385  485582 system_pods.go:89] "kube-apiserver-old-k8s-version-482317" [970f565c-b1c3-40cd-8165-f425b311a9e7] Running
	I1227 10:24:25.084411  485582 system_pods.go:89] "kube-controller-manager-old-k8s-version-482317" [41aa78cd-9c7b-49f7-bcc1-e85c6d9d606e] Running
	I1227 10:24:25.084444  485582 system_pods.go:89] "kube-proxy-gr6gq" [3a6b528b-199e-43a6-8a9b-f9157d3800a0] Running
	I1227 10:24:25.084470  485582 system_pods.go:89] "kube-scheduler-old-k8s-version-482317" [42afac7c-9449-4b76-b9d1-ef7655e77163] Running
	I1227 10:24:25.084496  485582 system_pods.go:89] "storage-provisioner" [0bd371c6-e3b4-4c0b-8a3a-f17eade42f06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:24:25.373172  485582 system_pods.go:86] 8 kube-system pods found
	I1227 10:24:25.373212  485582 system_pods.go:89] "coredns-5dd5756b68-xtcrs" [a1ff47cc-238c-4217-8591-ff8b26b907da] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:24:25.373220  485582 system_pods.go:89] "etcd-old-k8s-version-482317" [70dce620-1f12-49f9-8f70-ab1eb4c021eb] Running
	I1227 10:24:25.373226  485582 system_pods.go:89] "kindnet-4jvpn" [35d8c991-0977-4f5f-95d3-d06fdf9b1481] Running
	I1227 10:24:25.373232  485582 system_pods.go:89] "kube-apiserver-old-k8s-version-482317" [970f565c-b1c3-40cd-8165-f425b311a9e7] Running
	I1227 10:24:25.373238  485582 system_pods.go:89] "kube-controller-manager-old-k8s-version-482317" [41aa78cd-9c7b-49f7-bcc1-e85c6d9d606e] Running
	I1227 10:24:25.373242  485582 system_pods.go:89] "kube-proxy-gr6gq" [3a6b528b-199e-43a6-8a9b-f9157d3800a0] Running
	I1227 10:24:25.373248  485582 system_pods.go:89] "kube-scheduler-old-k8s-version-482317" [42afac7c-9449-4b76-b9d1-ef7655e77163] Running
	I1227 10:24:25.373260  485582 system_pods.go:89] "storage-provisioner" [0bd371c6-e3b4-4c0b-8a3a-f17eade42f06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:24:25.812041  485582 system_pods.go:86] 8 kube-system pods found
	I1227 10:24:25.812078  485582 system_pods.go:89] "coredns-5dd5756b68-xtcrs" [a1ff47cc-238c-4217-8591-ff8b26b907da] Running
	I1227 10:24:25.812087  485582 system_pods.go:89] "etcd-old-k8s-version-482317" [70dce620-1f12-49f9-8f70-ab1eb4c021eb] Running
	I1227 10:24:25.812092  485582 system_pods.go:89] "kindnet-4jvpn" [35d8c991-0977-4f5f-95d3-d06fdf9b1481] Running
	I1227 10:24:25.812098  485582 system_pods.go:89] "kube-apiserver-old-k8s-version-482317" [970f565c-b1c3-40cd-8165-f425b311a9e7] Running
	I1227 10:24:25.812104  485582 system_pods.go:89] "kube-controller-manager-old-k8s-version-482317" [41aa78cd-9c7b-49f7-bcc1-e85c6d9d606e] Running
	I1227 10:24:25.812109  485582 system_pods.go:89] "kube-proxy-gr6gq" [3a6b528b-199e-43a6-8a9b-f9157d3800a0] Running
	I1227 10:24:25.812114  485582 system_pods.go:89] "kube-scheduler-old-k8s-version-482317" [42afac7c-9449-4b76-b9d1-ef7655e77163] Running
	I1227 10:24:25.812119  485582 system_pods.go:89] "storage-provisioner" [0bd371c6-e3b4-4c0b-8a3a-f17eade42f06] Running
	I1227 10:24:25.812127  485582 system_pods.go:126] duration metric: took 987.450827ms to wait for k8s-apps to be running ...
	I1227 10:24:25.812139  485582 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:24:25.812198  485582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:24:25.826367  485582 system_svc.go:56] duration metric: took 14.218037ms WaitForService to wait for kubelet
	I1227 10:24:25.826406  485582 kubeadm.go:587] duration metric: took 14.95668603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:24:25.826433  485582 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:24:25.829369  485582 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:24:25.829400  485582 node_conditions.go:123] node cpu capacity is 2
	I1227 10:24:25.829415  485582 node_conditions.go:105] duration metric: took 2.970426ms to run NodePressure ...
	I1227 10:24:25.829427  485582 start.go:242] waiting for startup goroutines ...
	I1227 10:24:25.829435  485582 start.go:247] waiting for cluster config update ...
	I1227 10:24:25.829446  485582 start.go:256] writing updated cluster config ...
	I1227 10:24:25.829752  485582 ssh_runner.go:195] Run: rm -f paused
	I1227 10:24:25.833890  485582 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:24:25.838146  485582 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xtcrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:25.844582  485582 pod_ready.go:94] pod "coredns-5dd5756b68-xtcrs" is "Ready"
	I1227 10:24:25.844613  485582 pod_ready.go:86] duration metric: took 6.44237ms for pod "coredns-5dd5756b68-xtcrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:25.847761  485582 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:25.852876  485582 pod_ready.go:94] pod "etcd-old-k8s-version-482317" is "Ready"
	I1227 10:24:25.852908  485582 pod_ready.go:86] duration metric: took 5.121889ms for pod "etcd-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:25.856168  485582 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:25.861589  485582 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-482317" is "Ready"
	I1227 10:24:25.861616  485582 pod_ready.go:86] duration metric: took 5.422823ms for pod "kube-apiserver-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:25.864756  485582 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:26.237703  485582 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-482317" is "Ready"
	I1227 10:24:26.237731  485582 pod_ready.go:86] duration metric: took 372.948492ms for pod "kube-controller-manager-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:26.438484  485582 pod_ready.go:83] waiting for pod "kube-proxy-gr6gq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:26.838250  485582 pod_ready.go:94] pod "kube-proxy-gr6gq" is "Ready"
	I1227 10:24:26.838277  485582 pod_ready.go:86] duration metric: took 399.766958ms for pod "kube-proxy-gr6gq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:27.039285  485582 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:27.438038  485582 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-482317" is "Ready"
	I1227 10:24:27.438069  485582 pod_ready.go:86] duration metric: took 398.757095ms for pod "kube-scheduler-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:24:27.438083  485582 pod_ready.go:40] duration metric: took 1.604159117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:24:27.504978  485582 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1227 10:24:27.508200  485582 out.go:203] 
	W1227 10:24:27.511065  485582 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 10:24:27.514950  485582 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:24:27.519197  485582 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-482317" cluster and "default" namespace by default
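	The warning above is a client/server version-skew notice: the host kubectl is 1.33.2 while the cluster runs 1.28.0, five minor versions apart, whereas kubectl is only supported within one minor version of the apiserver. To compare the two versions or to use a matching client:
	
	  kubectl version                                              # shows client and server versions
	  minikube -p old-k8s-version-482317 kubectl -- get pods -A   # uses a kubectl matching the cluster
	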
	
	
	==> CRI-O <==
	Dec 27 10:24:25 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:25.064452247Z" level=info msg="Created container b3cdf821a70769f08767c4dce8c6995f69ce0c63a38c973cf574df5731e52f44: kube-system/coredns-5dd5756b68-xtcrs/coredns" id=40da3a44-b5d3-4f83-bb08-14b2c1f80b71 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:24:25 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:25.06541791Z" level=info msg="Starting container: b3cdf821a70769f08767c4dce8c6995f69ce0c63a38c973cf574df5731e52f44" id=81fded4b-d90a-473c-ae2e-a2ce432f8325 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:24:25 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:25.067260776Z" level=info msg="Started container" PID=1923 containerID=b3cdf821a70769f08767c4dce8c6995f69ce0c63a38c973cf574df5731e52f44 description=kube-system/coredns-5dd5756b68-xtcrs/coredns id=81fded4b-d90a-473c-ae2e-a2ce432f8325 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e7ed77cd9252668d22184f41c1c64150fc94700fe6f61679a72e33ad30da055
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.010828693Z" level=info msg="Running pod sandbox: default/busybox/POD" id=da0a3229-3703-411d-8f6a-12e544d0de88 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.010922068Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.016207101Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c8ee713044735e8fdc7bc8711f910ed50b4f9b4055d58bea2fec3481cafd4bb8 UID:b0c0fdc8-8b9b-4e39-882b-71311e66855c NetNS:/var/run/netns/d3b6e0f1-cdae-4888-9641-e74b3a7ea0de Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000f77d0}] Aliases:map[]}"
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.016245723Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.027151087Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c8ee713044735e8fdc7bc8711f910ed50b4f9b4055d58bea2fec3481cafd4bb8 UID:b0c0fdc8-8b9b-4e39-882b-71311e66855c NetNS:/var/run/netns/d3b6e0f1-cdae-4888-9641-e74b3a7ea0de Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000f77d0}] Aliases:map[]}"
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.027319475Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.032465422Z" level=info msg="Ran pod sandbox c8ee713044735e8fdc7bc8711f910ed50b4f9b4055d58bea2fec3481cafd4bb8 with infra container: default/busybox/POD" id=da0a3229-3703-411d-8f6a-12e544d0de88 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.033476016Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5e60ce08-24cc-43d3-a328-f7db1a8d2d4f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.033633351Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5e60ce08-24cc-43d3-a328-f7db1a8d2d4f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.033677733Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5e60ce08-24cc-43d3-a328-f7db1a8d2d4f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.034273981Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5879ec4c-d3e5-4878-811c-0eeb85ab8e5c name=/runtime.v1.ImageService/PullImage
	Dec 27 10:24:28 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:28.037283193Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 10:24:30 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:30.103774327Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5879ec4c-d3e5-4878-811c-0eeb85ab8e5c name=/runtime.v1.ImageService/PullImage
	Dec 27 10:24:30 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:30.10512968Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7d42ec97-6b8f-40ae-9059-a2c02594cc66 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:24:30 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:30.107098225Z" level=info msg="Creating container: default/busybox/busybox" id=ab020246-829d-4c51-92b5-2559f4dbf8e0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:24:30 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:30.107266448Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:24:30 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:30.113465574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:24:30 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:30.11401625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:24:30 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:30.132313672Z" level=info msg="Created container 9511c0d0d1de1f4adb5bc6dad44deaa5d9c8b2dafba97f37d1d912fec82466a1: default/busybox/busybox" id=ab020246-829d-4c51-92b5-2559f4dbf8e0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:24:30 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:30.133380054Z" level=info msg="Starting container: 9511c0d0d1de1f4adb5bc6dad44deaa5d9c8b2dafba97f37d1d912fec82466a1" id=b7252804-ba69-41d8-ad45-ce20174d0c6d name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:24:30 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:30.13517953Z" level=info msg="Started container" PID=1977 containerID=9511c0d0d1de1f4adb5bc6dad44deaa5d9c8b2dafba97f37d1d912fec82466a1 description=default/busybox/busybox id=b7252804-ba69-41d8-ad45-ce20174d0c6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8ee713044735e8fdc7bc8711f910ed50b4f9b4055d58bea2fec3481cafd4bb8
	Dec 27 10:24:36 old-k8s-version-482317 crio[835]: time="2025-12-27T10:24:36.896610141Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	9511c0d0d1de1       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   c8ee713044735       busybox                                          default
	b3cdf821a7076       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   1e7ed77cd9252       coredns-5dd5756b68-xtcrs                         kube-system
	dbbed544364df       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   8c093f705f159       storage-provisioner                              kube-system
	56b2a48b58ef4       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   aa16916e36f2c       kindnet-4jvpn                                    kube-system
	f396db7e7b408       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   12838bfc4de7a       kube-proxy-gr6gq                                 kube-system
	43a68063486b2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      46 seconds ago      Running             etcd                      0                   abe359d6e3e5a       etcd-old-k8s-version-482317                      kube-system
	21f1d96300d34       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      46 seconds ago      Running             kube-controller-manager   0                   5314a989160a4       kube-controller-manager-old-k8s-version-482317   kube-system
	2cf57083983c3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      46 seconds ago      Running             kube-apiserver            0                   58d70d81bfa74       kube-apiserver-old-k8s-version-482317            kube-system
	e05319b079c13       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   e0ea736c0dde4       kube-scheduler-old-k8s-version-482317            kube-system
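	This table is the CRI-O view of the node's containers; the same information can be pulled straight from the runtime on the node, for example:
	
	  minikube -p old-k8s-version-482317 ssh -- sudo crictl ps -a
	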
	
	
	==> coredns [b3cdf821a70769f08767c4dce8c6995f69ce0c63a38c973cf574df5731e52f44] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46867 - 33834 "HINFO IN 2442530307932471797.4042155587894045793. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013580837s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-482317
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-482317
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=old-k8s-version-482317
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_23_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:23:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-482317
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:24:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:24:28 +0000   Sat, 27 Dec 2025 10:23:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:24:28 +0000   Sat, 27 Dec 2025 10:23:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:24:28 +0000   Sat, 27 Dec 2025 10:23:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:24:28 +0000   Sat, 27 Dec 2025 10:24:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-482317
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                a7aa5659-6104-4ad4-974f-9a450eb0c75f
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-xtcrs                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-482317                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-4jvpn                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-482317             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-482317    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-gr6gq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-482317             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-482317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-482317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-482317 event: Registered Node old-k8s-version-482317 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-482317 status is now: NodeReady
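	The Allocated resources block above is simply the sum of the pod requests and limits listed in the Non-terminated Pods table, expressed against the node's allocatable capacity: CPU requests 100m+100m+100m+250m+200m+100m = 850m out of 2000m allocatable, i.e. about 42%, and memory requests 70Mi+100Mi+50Mi = 220Mi out of 8022296Ki, which truncates to 2%.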
	
	
	==> dmesg <==
	[Dec27 09:45] overlayfs: idmapped layers are currently not supported
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	[Dec27 09:53] overlayfs: idmapped layers are currently not supported
	[Dec27 09:57] overlayfs: idmapped layers are currently not supported
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [43a68063486b2e481992cc2573417732b4ea755a28d5e5365553b819047d8c82] <==
	{"level":"info","ts":"2025-12-27T10:23:51.582394Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:23:51.588017Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:23:51.588082Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:23:51.585745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T10:23:51.588374Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-27T10:23:51.588965Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:23:51.589239Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:23:52.524035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T10:23:52.524192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T10:23:52.524241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T10:23:52.524299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:23:52.52433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:23:52.524375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T10:23:52.524408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:23:52.528164Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-482317 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:23:52.52832Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:23:52.529357Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:23:52.529468Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:23:52.530289Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:23:52.532392Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:23:52.5341Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:23:52.534171Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:23:52.536057Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:23:52.536135Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:23:52.536168Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 10:24:38 up  2:07,  0 user,  load average: 1.24, 1.32, 1.80
	Linux old-k8s-version-482317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [56b2a48b58ef48cbf591e8827746dd3c90d220f51d3c714b8982fb9baa220e99] <==
	I1227 10:24:14.318533       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:24:14.318946       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:24:14.319107       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:24:14.319149       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:24:14.319190       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:24:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:24:14.520901       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:24:14.523873       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:24:14.523956       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:24:14.524139       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:24:14.724499       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:24:14.724613       1 metrics.go:72] Registering metrics
	I1227 10:24:14.724701       1 controller.go:711] "Syncing nftables rules"
	I1227 10:24:24.529200       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:24:24.529316       1 main.go:301] handling current node
	I1227 10:24:34.521744       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:24:34.521785       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2cf57083983c34d0caae1ea8ad381d34d98725a54a8ebcc602195533edb60384] <==
	I1227 10:23:55.050641       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:23:55.054580       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 10:23:55.054853       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 10:23:55.054924       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:23:55.063034       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 10:23:55.063140       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 10:23:55.082959       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 10:23:55.084813       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 10:23:55.136905       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 10:23:55.183432       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:23:55.843112       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1227 10:23:55.848678       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1227 10:23:55.848702       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 10:23:56.436614       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:23:56.483617       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:23:56.586595       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 10:23:56.593896       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 10:23:56.595012       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 10:23:56.600032       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:23:56.908459       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 10:23:58.414926       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 10:23:58.428358       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 10:23:58.442399       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1227 10:24:10.067703       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1227 10:24:10.665560       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [21f1d96300d346b406b778988b0c81c64e4b72232473cb9309efb2958402c275] <==
	I1227 10:24:10.073452       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1227 10:24:10.302701       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:24:10.302760       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 10:24:10.319663       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:24:10.676508       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gr6gq"
	I1227 10:24:10.684163       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4jvpn"
	I1227 10:24:10.805442       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2r948"
	I1227 10:24:10.833025       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xtcrs"
	I1227 10:24:10.843822       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="770.121323ms"
	I1227 10:24:10.867823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.893104ms"
	I1227 10:24:10.868110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.284µs"
	I1227 10:24:10.913823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.797µs"
	I1227 10:24:12.332354       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1227 10:24:12.366408       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-2r948"
	I1227 10:24:12.390775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.210321ms"
	I1227 10:24:12.401314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.482968ms"
	I1227 10:24:12.402287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.161µs"
	I1227 10:24:24.663876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.863µs"
	I1227 10:24:24.687203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.002µs"
	I1227 10:24:24.897361       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1227 10:24:24.898214       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-xtcrs" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-xtcrs"
	I1227 10:24:24.898325       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1227 10:24:25.709394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.37µs"
	I1227 10:24:25.761855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.607381ms"
	I1227 10:24:25.761962       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.146µs"
	
	
	==> kube-proxy [f396db7e7b408e741f8c8a25346ce95aebb49d028d6ccfc0e7f4e67edda51ff5] <==
	I1227 10:24:11.293343       1 server_others.go:69] "Using iptables proxy"
	I1227 10:24:11.346605       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1227 10:24:11.385989       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:24:11.396951       1 server_others.go:152] "Using iptables Proxier"
	I1227 10:24:11.396990       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 10:24:11.396997       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 10:24:11.397024       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 10:24:11.397214       1 server.go:846] "Version info" version="v1.28.0"
	I1227 10:24:11.397224       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:24:11.399284       1 config.go:188] "Starting service config controller"
	I1227 10:24:11.399310       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 10:24:11.399336       1 config.go:97] "Starting endpoint slice config controller"
	I1227 10:24:11.399340       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 10:24:11.399951       1 config.go:315] "Starting node config controller"
	I1227 10:24:11.399960       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 10:24:11.500441       1 shared_informer.go:318] Caches are synced for node config
	I1227 10:24:11.500474       1 shared_informer.go:318] Caches are synced for service config
	I1227 10:24:11.500502       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e05319b079c13ad3e46ae9a256baa3c8c62f7642f478def3bdd3cba434e3f758] <==
	W1227 10:23:55.116942       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1227 10:23:55.116957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 10:23:55.117011       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1227 10:23:55.117024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1227 10:23:55.117157       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1227 10:23:55.117215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1227 10:23:55.119732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1227 10:23:55.119772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1227 10:23:55.964281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1227 10:23:55.964321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1227 10:23:55.982300       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1227 10:23:55.982411       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:23:56.003034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1227 10:23:56.003142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1227 10:23:56.048549       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1227 10:23:56.048726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1227 10:23:56.048685       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1227 10:23:56.048815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1227 10:23:56.109709       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1227 10:23:56.109826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1227 10:23:56.232888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1227 10:23:56.232929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1227 10:23:56.244441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1227 10:23:56.244553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1227 10:23:58.490969       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 10:24:10 old-k8s-version-482317 kubelet[1367]: I1227 10:24:10.725433    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35d8c991-0977-4f5f-95d3-d06fdf9b1481-xtables-lock\") pod \"kindnet-4jvpn\" (UID: \"35d8c991-0977-4f5f-95d3-d06fdf9b1481\") " pod="kube-system/kindnet-4jvpn"
	Dec 27 10:24:10 old-k8s-version-482317 kubelet[1367]: I1227 10:24:10.725558    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35d8c991-0977-4f5f-95d3-d06fdf9b1481-lib-modules\") pod \"kindnet-4jvpn\" (UID: \"35d8c991-0977-4f5f-95d3-d06fdf9b1481\") " pod="kube-system/kindnet-4jvpn"
	Dec 27 10:24:10 old-k8s-version-482317 kubelet[1367]: I1227 10:24:10.725585    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a6b528b-199e-43a6-8a9b-f9157d3800a0-kube-proxy\") pod \"kube-proxy-gr6gq\" (UID: \"3a6b528b-199e-43a6-8a9b-f9157d3800a0\") " pod="kube-system/kube-proxy-gr6gq"
	Dec 27 10:24:10 old-k8s-version-482317 kubelet[1367]: I1227 10:24:10.725644    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a6b528b-199e-43a6-8a9b-f9157d3800a0-lib-modules\") pod \"kube-proxy-gr6gq\" (UID: \"3a6b528b-199e-43a6-8a9b-f9157d3800a0\") " pod="kube-system/kube-proxy-gr6gq"
	Dec 27 10:24:10 old-k8s-version-482317 kubelet[1367]: I1227 10:24:10.725669    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvw2\" (UniqueName: \"kubernetes.io/projected/35d8c991-0977-4f5f-95d3-d06fdf9b1481-kube-api-access-pgvw2\") pod \"kindnet-4jvpn\" (UID: \"35d8c991-0977-4f5f-95d3-d06fdf9b1481\") " pod="kube-system/kindnet-4jvpn"
	Dec 27 10:24:10 old-k8s-version-482317 kubelet[1367]: I1227 10:24:10.725765    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a6b528b-199e-43a6-8a9b-f9157d3800a0-xtables-lock\") pod \"kube-proxy-gr6gq\" (UID: \"3a6b528b-199e-43a6-8a9b-f9157d3800a0\") " pod="kube-system/kube-proxy-gr6gq"
	Dec 27 10:24:10 old-k8s-version-482317 kubelet[1367]: I1227 10:24:10.725799    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/35d8c991-0977-4f5f-95d3-d06fdf9b1481-cni-cfg\") pod \"kindnet-4jvpn\" (UID: \"35d8c991-0977-4f5f-95d3-d06fdf9b1481\") " pod="kube-system/kindnet-4jvpn"
	Dec 27 10:24:11 old-k8s-version-482317 kubelet[1367]: W1227 10:24:11.044344    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/crio-12838bfc4de7a18d7b79ec9d07de73627ad28c6ba04301707402e2edb26156de WatchSource:0}: Error finding container 12838bfc4de7a18d7b79ec9d07de73627ad28c6ba04301707402e2edb26156de: Status 404 returned error can't find the container with id 12838bfc4de7a18d7b79ec9d07de73627ad28c6ba04301707402e2edb26156de
	Dec 27 10:24:11 old-k8s-version-482317 kubelet[1367]: W1227 10:24:11.075741    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/crio-aa16916e36f2c348186fa3f1eb1cb4e8ece9559b0821944214d56b31b1f4653a WatchSource:0}: Error finding container aa16916e36f2c348186fa3f1eb1cb4e8ece9559b0821944214d56b31b1f4653a: Status 404 returned error can't find the container with id aa16916e36f2c348186fa3f1eb1cb4e8ece9559b0821944214d56b31b1f4653a
	Dec 27 10:24:14 old-k8s-version-482317 kubelet[1367]: I1227 10:24:14.686513    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gr6gq" podStartSLOduration=4.686458185 podCreationTimestamp="2025-12-27 10:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:24:11.682105869 +0000 UTC m=+13.299054531" watchObservedRunningTime="2025-12-27 10:24:14.686458185 +0000 UTC m=+16.303406831"
	Dec 27 10:24:24 old-k8s-version-482317 kubelet[1367]: I1227 10:24:24.630702    1367 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 27 10:24:24 old-k8s-version-482317 kubelet[1367]: I1227 10:24:24.663281    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4jvpn" podStartSLOduration=11.618239123 podCreationTimestamp="2025-12-27 10:24:10 +0000 UTC" firstStartedPulling="2025-12-27 10:24:11.088156524 +0000 UTC m=+12.705105170" lastFinishedPulling="2025-12-27 10:24:14.133148448 +0000 UTC m=+15.750097102" observedRunningTime="2025-12-27 10:24:14.687350485 +0000 UTC m=+16.304299139" watchObservedRunningTime="2025-12-27 10:24:24.663231055 +0000 UTC m=+26.280179709"
	Dec 27 10:24:24 old-k8s-version-482317 kubelet[1367]: I1227 10:24:24.663665    1367 topology_manager.go:215] "Topology Admit Handler" podUID="a1ff47cc-238c-4217-8591-ff8b26b907da" podNamespace="kube-system" podName="coredns-5dd5756b68-xtcrs"
	Dec 27 10:24:24 old-k8s-version-482317 kubelet[1367]: I1227 10:24:24.665592    1367 topology_manager.go:215] "Topology Admit Handler" podUID="0bd371c6-e3b4-4c0b-8a3a-f17eade42f06" podNamespace="kube-system" podName="storage-provisioner"
	Dec 27 10:24:24 old-k8s-version-482317 kubelet[1367]: I1227 10:24:24.743126    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgzp\" (UniqueName: \"kubernetes.io/projected/0bd371c6-e3b4-4c0b-8a3a-f17eade42f06-kube-api-access-7kgzp\") pod \"storage-provisioner\" (UID: \"0bd371c6-e3b4-4c0b-8a3a-f17eade42f06\") " pod="kube-system/storage-provisioner"
	Dec 27 10:24:24 old-k8s-version-482317 kubelet[1367]: I1227 10:24:24.743387    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9stfj\" (UniqueName: \"kubernetes.io/projected/a1ff47cc-238c-4217-8591-ff8b26b907da-kube-api-access-9stfj\") pod \"coredns-5dd5756b68-xtcrs\" (UID: \"a1ff47cc-238c-4217-8591-ff8b26b907da\") " pod="kube-system/coredns-5dd5756b68-xtcrs"
	Dec 27 10:24:24 old-k8s-version-482317 kubelet[1367]: I1227 10:24:24.743482    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1ff47cc-238c-4217-8591-ff8b26b907da-config-volume\") pod \"coredns-5dd5756b68-xtcrs\" (UID: \"a1ff47cc-238c-4217-8591-ff8b26b907da\") " pod="kube-system/coredns-5dd5756b68-xtcrs"
	Dec 27 10:24:24 old-k8s-version-482317 kubelet[1367]: I1227 10:24:24.743565    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0bd371c6-e3b4-4c0b-8a3a-f17eade42f06-tmp\") pod \"storage-provisioner\" (UID: \"0bd371c6-e3b4-4c0b-8a3a-f17eade42f06\") " pod="kube-system/storage-provisioner"
	Dec 27 10:24:24 old-k8s-version-482317 kubelet[1367]: W1227 10:24:24.980531    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/crio-8c093f705f1595ba9ecdddc9936ecee3b0f2305113fd3de1174936dad4b80533 WatchSource:0}: Error finding container 8c093f705f1595ba9ecdddc9936ecee3b0f2305113fd3de1174936dad4b80533: Status 404 returned error can't find the container with id 8c093f705f1595ba9ecdddc9936ecee3b0f2305113fd3de1174936dad4b80533
	Dec 27 10:24:25 old-k8s-version-482317 kubelet[1367]: W1227 10:24:25.010996    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/crio-1e7ed77cd9252668d22184f41c1c64150fc94700fe6f61679a72e33ad30da055 WatchSource:0}: Error finding container 1e7ed77cd9252668d22184f41c1c64150fc94700fe6f61679a72e33ad30da055: Status 404 returned error can't find the container with id 1e7ed77cd9252668d22184f41c1c64150fc94700fe6f61679a72e33ad30da055
	Dec 27 10:24:25 old-k8s-version-482317 kubelet[1367]: I1227 10:24:25.713131    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xtcrs" podStartSLOduration=15.713090266 podCreationTimestamp="2025-12-27 10:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:24:25.712736999 +0000 UTC m=+27.329685645" watchObservedRunningTime="2025-12-27 10:24:25.713090266 +0000 UTC m=+27.330038912"
	Dec 27 10:24:25 old-k8s-version-482317 kubelet[1367]: I1227 10:24:25.746803    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.746760937 podCreationTimestamp="2025-12-27 10:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:24:25.732676127 +0000 UTC m=+27.349624781" watchObservedRunningTime="2025-12-27 10:24:25.746760937 +0000 UTC m=+27.363709582"
	Dec 27 10:24:27 old-k8s-version-482317 kubelet[1367]: I1227 10:24:27.708542    1367 topology_manager.go:215] "Topology Admit Handler" podUID="b0c0fdc8-8b9b-4e39-882b-71311e66855c" podNamespace="default" podName="busybox"
	Dec 27 10:24:27 old-k8s-version-482317 kubelet[1367]: I1227 10:24:27.762968    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nbjp\" (UniqueName: \"kubernetes.io/projected/b0c0fdc8-8b9b-4e39-882b-71311e66855c-kube-api-access-5nbjp\") pod \"busybox\" (UID: \"b0c0fdc8-8b9b-4e39-882b-71311e66855c\") " pod="default/busybox"
	Dec 27 10:24:28 old-k8s-version-482317 kubelet[1367]: W1227 10:24:28.029470    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/crio-c8ee713044735e8fdc7bc8711f910ed50b4f9b4055d58bea2fec3481cafd4bb8 WatchSource:0}: Error finding container c8ee713044735e8fdc7bc8711f910ed50b4f9b4055d58bea2fec3481cafd4bb8: Status 404 returned error can't find the container with id c8ee713044735e8fdc7bc8711f910ed50b4f9b4055d58bea2fec3481cafd4bb8
	
	
	==> storage-provisioner [dbbed544364df96c6ded9209cf2c57658c21e2e1763d2c86c7c4fa6f725944d3] <==
	I1227 10:24:25.052401       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:24:25.075191       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:24:25.075314       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 10:24:25.098168       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:24:25.100818       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-482317_21de5e36-9729-4160-adcc-af722726b284!
	I1227 10:24:25.098322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a92e87e6-a9f2-4729-a034-2de7c1eae4b3", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-482317_21de5e36-9729-4160-adcc-af722726b284 became leader
	I1227 10:24:25.201587       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-482317_21de5e36-9729-4160-adcc-af722726b284!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-482317 -n old-k8s-version-482317
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-482317 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.44s)
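For context, the post-mortem helper above ends by shelling out to kubectl with a field selector that lists any pods not in the Running phase. The following is a minimal, standalone sketch of that same check, useful for re-running it by hand against this profile; the context name is taken from the log above, and the program itself is illustrative, not part of the test harness.

	// postmortem_pods_sketch.go — hypothetical standalone version of the
	// "pods not Running" check performed by helpers_test.go above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Context name copied from the post-mortem log; adjust for your profile.
		out, err := exec.Command(
			"kubectl", "--context", "old-k8s-version-482317",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		if len(out) == 0 {
			fmt.Println("all pods are Running")
			return
		}
		fmt.Printf("non-Running pods: %s\n", out)
	}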

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-482317 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-482317 --alsologtostderr -v=1: exit status 80 (2.032870387s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-482317 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:25:55.725290  492453 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:25:55.725402  492453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:25:55.725413  492453 out.go:374] Setting ErrFile to fd 2...
	I1227 10:25:55.725419  492453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:25:55.725757  492453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:25:55.726081  492453 out.go:368] Setting JSON to false
	I1227 10:25:55.726106  492453 mustload.go:66] Loading cluster: old-k8s-version-482317
	I1227 10:25:55.726801  492453 config.go:182] Loaded profile config "old-k8s-version-482317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:25:55.727522  492453 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:25:55.747719  492453 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:25:55.748123  492453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:25:55.826237  492453 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 10:25:55.815228514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:25:55.826945  492453 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-482317 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s
(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:25:55.832586  492453 out.go:179] * Pausing node old-k8s-version-482317 ... 
	I1227 10:25:55.835579  492453 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:25:55.835946  492453 ssh_runner.go:195] Run: systemctl --version
	I1227 10:25:55.836042  492453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:25:55.853568  492453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:25:55.951475  492453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:25:55.979826  492453 pause.go:52] kubelet running: true
	I1227 10:25:55.979920  492453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:25:56.227377  492453 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:25:56.227480  492453 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:25:56.301561  492453 cri.go:96] found id: "7b0ce55a826e7e76318fd3f47cc892955448f52ed3d924c467eb4effc59b9afa"
	I1227 10:25:56.301587  492453 cri.go:96] found id: "0579bd17b999c40f300161843dca65348d880147d408e942833f0e8a1efa1b67"
	I1227 10:25:56.301593  492453 cri.go:96] found id: "9b09d87f39c5152bb435531967cef400dd6c3797b38f7965e024bc264e021c98"
	I1227 10:25:56.301597  492453 cri.go:96] found id: "a6a43cacb933af66d20a2d7793c31b0b116cbea7d00c6ae9dceb483bf2f0b2bd"
	I1227 10:25:56.301617  492453 cri.go:96] found id: "42968e8e6aa87735f51cd79fbe984e9063af3193c09105ee955eb81677f295b5"
	I1227 10:25:56.301627  492453 cri.go:96] found id: "6af676216486829c31c72726886da4d5b9d2fdd5e03d47e9d092cd74c92823fd"
	I1227 10:25:56.301631  492453 cri.go:96] found id: "5c18700dae648beeb6cbc946e81f00349e9db29024a7fcb4389e4ebb5f3220e3"
	I1227 10:25:56.301634  492453 cri.go:96] found id: "7904f50147b3a49201ac12cc375f895cdfbd6570c8043be40e8f86a6040e4ba7"
	I1227 10:25:56.301638  492453 cri.go:96] found id: "edba935460de1d0d6cf628ac3e09f2ff27ad3160fda618c38a75071e4b54afcc"
	I1227 10:25:56.301646  492453 cri.go:96] found id: "85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1"
	I1227 10:25:56.301653  492453 cri.go:96] found id: "0e925ff8e67acd4543b394e63d2b4c088abc3bdd579e1132f3c6096feceec216"
	I1227 10:25:56.301657  492453 cri.go:96] found id: ""
	I1227 10:25:56.301716  492453 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:25:56.313083  492453 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:25:56Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:25:56.604613  492453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:25:56.618006  492453 pause.go:52] kubelet running: false
	I1227 10:25:56.618103  492453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:25:56.787780  492453 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:25:56.787858  492453 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:25:56.858116  492453 cri.go:96] found id: "7b0ce55a826e7e76318fd3f47cc892955448f52ed3d924c467eb4effc59b9afa"
	I1227 10:25:56.858138  492453 cri.go:96] found id: "0579bd17b999c40f300161843dca65348d880147d408e942833f0e8a1efa1b67"
	I1227 10:25:56.858143  492453 cri.go:96] found id: "9b09d87f39c5152bb435531967cef400dd6c3797b38f7965e024bc264e021c98"
	I1227 10:25:56.858147  492453 cri.go:96] found id: "a6a43cacb933af66d20a2d7793c31b0b116cbea7d00c6ae9dceb483bf2f0b2bd"
	I1227 10:25:56.858150  492453 cri.go:96] found id: "42968e8e6aa87735f51cd79fbe984e9063af3193c09105ee955eb81677f295b5"
	I1227 10:25:56.858153  492453 cri.go:96] found id: "6af676216486829c31c72726886da4d5b9d2fdd5e03d47e9d092cd74c92823fd"
	I1227 10:25:56.858157  492453 cri.go:96] found id: "5c18700dae648beeb6cbc946e81f00349e9db29024a7fcb4389e4ebb5f3220e3"
	I1227 10:25:56.858161  492453 cri.go:96] found id: "7904f50147b3a49201ac12cc375f895cdfbd6570c8043be40e8f86a6040e4ba7"
	I1227 10:25:56.858164  492453 cri.go:96] found id: "edba935460de1d0d6cf628ac3e09f2ff27ad3160fda618c38a75071e4b54afcc"
	I1227 10:25:56.858170  492453 cri.go:96] found id: "85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1"
	I1227 10:25:56.858195  492453 cri.go:96] found id: "0e925ff8e67acd4543b394e63d2b4c088abc3bdd579e1132f3c6096feceec216"
	I1227 10:25:56.858209  492453 cri.go:96] found id: ""
	I1227 10:25:56.858291  492453 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:25:57.406876  492453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:25:57.421067  492453 pause.go:52] kubelet running: false
	I1227 10:25:57.421164  492453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:25:57.600694  492453 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:25:57.600805  492453 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:25:57.671104  492453 cri.go:96] found id: "7b0ce55a826e7e76318fd3f47cc892955448f52ed3d924c467eb4effc59b9afa"
	I1227 10:25:57.671126  492453 cri.go:96] found id: "0579bd17b999c40f300161843dca65348d880147d408e942833f0e8a1efa1b67"
	I1227 10:25:57.671132  492453 cri.go:96] found id: "9b09d87f39c5152bb435531967cef400dd6c3797b38f7965e024bc264e021c98"
	I1227 10:25:57.671135  492453 cri.go:96] found id: "a6a43cacb933af66d20a2d7793c31b0b116cbea7d00c6ae9dceb483bf2f0b2bd"
	I1227 10:25:57.671139  492453 cri.go:96] found id: "42968e8e6aa87735f51cd79fbe984e9063af3193c09105ee955eb81677f295b5"
	I1227 10:25:57.671142  492453 cri.go:96] found id: "6af676216486829c31c72726886da4d5b9d2fdd5e03d47e9d092cd74c92823fd"
	I1227 10:25:57.671145  492453 cri.go:96] found id: "5c18700dae648beeb6cbc946e81f00349e9db29024a7fcb4389e4ebb5f3220e3"
	I1227 10:25:57.671148  492453 cri.go:96] found id: "7904f50147b3a49201ac12cc375f895cdfbd6570c8043be40e8f86a6040e4ba7"
	I1227 10:25:57.671151  492453 cri.go:96] found id: "edba935460de1d0d6cf628ac3e09f2ff27ad3160fda618c38a75071e4b54afcc"
	I1227 10:25:57.671157  492453 cri.go:96] found id: "85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1"
	I1227 10:25:57.671160  492453 cri.go:96] found id: "0e925ff8e67acd4543b394e63d2b4c088abc3bdd579e1132f3c6096feceec216"
	I1227 10:25:57.671163  492453 cri.go:96] found id: ""
	I1227 10:25:57.671214  492453 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:25:57.685925  492453 out.go:203] 
	W1227 10:25:57.688831  492453 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:25:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:25:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:25:57.688852  492453 out.go:285] * 
	* 
	W1227 10:25:57.691333  492453 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:25:57.694253  492453 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-482317 --alsologtostderr -v=1 failed: exit status 80
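The stderr above shows where the pause path actually breaks: kubelet is stopped and crictl still reports the kube-system containers, but `sudo runc list -f json` exits 1 because /run/runc does not exist on this crio node, which surfaces as GUEST_PAUSE. The sketch below is a hypothetical diagnostic that mirrors that step and reports a clearer message when the default runc state directory is absent; the path comes from the error message in the log, and nothing here reflects minikube's actual pause implementation.

	// runc_probe_sketch.go — hypothetical probe mirroring the failing
	// "runc list -f json" step from the pause log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Default runc state directory, as named in the error above.
		const runcRoot = "/run/runc"
		if _, err := os.Stat(runcRoot); err != nil {
			fmt.Printf("runc state dir %s is not usable (%v); the container runtime on this node may keep its state elsewhere\n",
				runcRoot, err)
			return
		}
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("running containers (json): %s\n", out)
	}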
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-482317
helpers_test.go:244: (dbg) docker inspect old-k8s-version-482317:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb",
	        "Created": "2025-12-27T10:23:35.1004286Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:24:52.20848896Z",
	            "FinishedAt": "2025-12-27T10:24:51.124394197Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/hosts",
	        "LogPath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb-json.log",
	        "Name": "/old-k8s-version-482317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-482317:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-482317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb",
	                "LowerDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-482317",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-482317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-482317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-482317",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-482317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "197e095a150270c4048f6a2ed45438acfce8260cd4a021112dc8d3040f28eae8",
	            "SandboxKey": "/var/run/docker/netns/197e095a1502",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-482317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:79:ae:9b:4e:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76ec4721253c21d528fd72ab4bdb6e7b5be9293e371f48ba721c982435ec2193",
	                    "EndpointID": "d8adba187b1225efb8482c6c4e589720577d8ff4456cc59312aa24abd9af1eff",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-482317",
	                        "d3ed077d2566"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
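The inspect dump above shows the old-k8s-version-482317 container up, with host ports bound for 22, 2376, 5000, 8443 and 32443. For reference, a minimal Go sketch (not part of the harness; the container name and template are only illustrative) of the same kind of single-field probe the log shows the tooling running via `docker container inspect --format`:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField runs `docker container inspect` with a Go template and returns
// the single field it renders, mirroring the cli_runner calls in this report.
func inspectField(container, tmpl string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container, "--format", tmpl).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// "old-k8s-version-482317" is the profile container named in this report.
	state, err := inspectField("old-k8s-version-482317", "{{.State.Status}}")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container state:", state) // "running" per the status output below
}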
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482317 -n old-k8s-version-482317
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482317 -n old-k8s-version-482317: exit status 2 (402.70882ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
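The harness tolerates the non-zero exit here ("may be ok") and moves on to collecting post-mortem logs. A hedged sketch of reproducing that exact probe outside the test, assuming the binary path and profile name shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as helpers_test.go:248; path and names are copied from the report.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-482317", "-n", "old-k8s-version-482317")
	out, err := cmd.Output()
	fmt.Println("host:", strings.TrimSpace(string(out))) // "Running" in this run
	if ee, ok := err.(*exec.ExitError); ok {
		// minikube status signals component state through its exit code;
		// this report recorded exit status 2 at this point.
		fmt.Println("exit code:", ee.ExitCode())
	}
}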
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-482317 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-482317 logs -n 25: (1.286585363s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-785247 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo containerd config dump                                                                                                                                                                                                  │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo crio config                                                                                                                                                                                                             │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ delete  │ -p cilium-785247                                                                                                                                                                                                                              │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:17 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ delete  │ -p cert-expiration-528820                                                                                                                                                                                                                     │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ start   │ -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-915850 │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │                     │
	│ delete  │ -p force-systemd-env-193016                                                                                                                                                                                                                   │ force-systemd-env-193016  │ jenkins │ v1.37.0 │ 27 Dec 25 10:22 UTC │ 27 Dec 25 10:22 UTC │
	│ start   │ -p cert-options-810217 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ cert-options-810217 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ -p cert-options-810217 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ delete  │ -p cert-options-810217                                                                                                                                                                                                                        │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │                     │
	│ stop    │ -p old-k8s-version-482317 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-482317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:25 UTC │
	│ image   │ old-k8s-version-482317 image list --format=json                                                                                                                                                                                               │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │ 27 Dec 25 10:25 UTC │
	│ pause   │ -p old-k8s-version-482317 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:24:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:24:51.776484  489746 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:24:51.776685  489746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:24:51.776711  489746 out.go:374] Setting ErrFile to fd 2...
	I1227 10:24:51.776732  489746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:24:51.777032  489746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:24:51.777455  489746 out.go:368] Setting JSON to false
	I1227 10:24:51.778376  489746 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7645,"bootTime":1766823447,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:24:51.778466  489746 start.go:143] virtualization:  
	I1227 10:24:51.783722  489746 out.go:179] * [old-k8s-version-482317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:24:51.786967  489746 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:24:51.787045  489746 notify.go:221] Checking for updates...
	I1227 10:24:51.791553  489746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:24:51.794607  489746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:24:51.797617  489746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:24:51.800666  489746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:24:51.803721  489746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:24:51.807244  489746 config.go:182] Loaded profile config "old-k8s-version-482317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:24:51.810897  489746 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 10:24:50.696647  478121 out.go:252]   - Booting up control plane ...
	I1227 10:24:50.696778  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:24:50.696893  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:24:50.696978  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:24:50.697112  478121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:24:50.697242  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:24:50.697361  478121 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:24:50.697455  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:24:50.697499  478121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:24:50.697634  478121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:24:50.697747  478121 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:24:50.697815  478121 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001046036s
	I1227 10:24:50.697823  478121 kubeadm.go:319] 
	I1227 10:24:50.697879  478121 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:24:50.697918  478121 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:24:50.698026  478121 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:24:50.698034  478121 kubeadm.go:319] 
	I1227 10:24:50.698138  478121 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:24:50.698174  478121 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:24:50.698209  478121 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1227 10:24:50.698340  478121 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001046036s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 10:24:50.698433  478121 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1227 10:24:50.698709  478121 kubeadm.go:319] 
	I1227 10:24:51.167170  478121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:24:51.185518  478121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:24:51.185583  478121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:24:51.197163  478121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:24:51.197188  478121 kubeadm.go:158] found existing configuration files:
	
	I1227 10:24:51.197242  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:24:51.207119  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:24:51.207184  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:24:51.215621  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:24:51.225127  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:24:51.225192  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:24:51.235213  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:24:51.245042  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:24:51.245107  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:24:51.253351  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:24:51.262229  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:24:51.262288  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:24:51.270965  478121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:24:51.466154  478121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:24:51.466578  478121 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:24:51.549199  478121 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:24:51.814002  489746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:24:51.860181  489746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:24:51.860387  489746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:24:51.972731  489746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:24:51.95210042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:24:51.972841  489746 docker.go:319] overlay module found
	I1227 10:24:51.976084  489746 out.go:179] * Using the docker driver based on existing profile
	I1227 10:24:51.978922  489746 start.go:309] selected driver: docker
	I1227 10:24:51.978941  489746 start.go:928] validating driver "docker" against &{Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:24:51.979048  489746 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:24:51.979759  489746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:24:52.075233  489746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:24:52.065007288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:24:52.075598  489746 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:24:52.075637  489746 cni.go:84] Creating CNI manager for ""
	I1227 10:24:52.075694  489746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:24:52.075736  489746 start.go:353] cluster config:
	{Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:24:52.079032  489746 out.go:179] * Starting "old-k8s-version-482317" primary control-plane node in "old-k8s-version-482317" cluster
	I1227 10:24:52.081900  489746 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:24:52.084698  489746 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:24:52.087559  489746 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:24:52.087609  489746 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:24:52.087625  489746 cache.go:65] Caching tarball of preloaded images
	I1227 10:24:52.087714  489746 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:24:52.087730  489746 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 10:24:52.087861  489746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/config.json ...
	I1227 10:24:52.088133  489746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:24:52.137623  489746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:24:52.137650  489746 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:24:52.137667  489746 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:24:52.137699  489746 start.go:360] acquireMachinesLock for old-k8s-version-482317: {Name:mk4c0cd3041b29cfcb95b36c1e5eae64b45ad166 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:24:52.137754  489746 start.go:364] duration metric: took 37.933µs to acquireMachinesLock for "old-k8s-version-482317"
	I1227 10:24:52.137790  489746 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:24:52.137800  489746 fix.go:54] fixHost starting: 
	I1227 10:24:52.138066  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:52.163283  489746 fix.go:112] recreateIfNeeded on old-k8s-version-482317: state=Stopped err=<nil>
	W1227 10:24:52.163325  489746 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:24:52.166620  489746 out.go:252] * Restarting existing docker container for "old-k8s-version-482317" ...
	I1227 10:24:52.166731  489746 cli_runner.go:164] Run: docker start old-k8s-version-482317
	I1227 10:24:52.538968  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:52.567284  489746 kic.go:430] container "old-k8s-version-482317" state is running.
	I1227 10:24:52.567681  489746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-482317
	I1227 10:24:52.598766  489746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/config.json ...
	I1227 10:24:52.598991  489746 machine.go:94] provisionDockerMachine start ...
	I1227 10:24:52.599058  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:52.621968  489746 main.go:144] libmachine: Using SSH client type: native
	I1227 10:24:52.622297  489746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 10:24:52.622313  489746 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:24:52.625954  489746 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:24:55.771800  489746 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-482317
	
	I1227 10:24:55.771824  489746 ubuntu.go:182] provisioning hostname "old-k8s-version-482317"
	I1227 10:24:55.771897  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:55.790784  489746 main.go:144] libmachine: Using SSH client type: native
	I1227 10:24:55.791115  489746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 10:24:55.791130  489746 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-482317 && echo "old-k8s-version-482317" | sudo tee /etc/hostname
	I1227 10:24:55.941808  489746 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-482317
	
	I1227 10:24:55.941890  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:55.960768  489746 main.go:144] libmachine: Using SSH client type: native
	I1227 10:24:55.961091  489746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 10:24:55.961107  489746 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-482317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-482317/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-482317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:24:56.104430  489746 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:24:56.104457  489746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:24:56.104480  489746 ubuntu.go:190] setting up certificates
	I1227 10:24:56.104489  489746 provision.go:84] configureAuth start
	I1227 10:24:56.104553  489746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-482317
	I1227 10:24:56.121153  489746 provision.go:143] copyHostCerts
	I1227 10:24:56.121223  489746 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:24:56.121243  489746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:24:56.121319  489746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:24:56.121442  489746 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:24:56.121454  489746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:24:56.121483  489746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:24:56.121552  489746 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:24:56.121559  489746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:24:56.121585  489746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:24:56.121647  489746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-482317 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-482317]
	I1227 10:24:56.186274  489746 provision.go:177] copyRemoteCerts
	I1227 10:24:56.186346  489746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:24:56.186391  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.203103  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:56.300368  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 10:24:56.318174  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:24:56.335865  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:24:56.352664  489746 provision.go:87] duration metric: took 248.153833ms to configureAuth
	I1227 10:24:56.352690  489746 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:24:56.352884  489746 config.go:182] Loaded profile config "old-k8s-version-482317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:24:56.352994  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.370932  489746 main.go:144] libmachine: Using SSH client type: native
	I1227 10:24:56.371244  489746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 10:24:56.371259  489746 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:24:56.704622  489746 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:24:56.704649  489746 machine.go:97] duration metric: took 4.105638811s to provisionDockerMachine
	I1227 10:24:56.704661  489746 start.go:293] postStartSetup for "old-k8s-version-482317" (driver="docker")
	I1227 10:24:56.704672  489746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:24:56.704733  489746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:24:56.704779  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.725302  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:56.824217  489746 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:24:56.827944  489746 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:24:56.828000  489746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:24:56.828014  489746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:24:56.828075  489746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:24:56.828170  489746 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:24:56.828279  489746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:24:56.835849  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:24:56.853953  489746 start.go:296] duration metric: took 149.275092ms for postStartSetup
	I1227 10:24:56.854037  489746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:24:56.854077  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.871463  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:56.969092  489746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:24:56.973745  489746 fix.go:56] duration metric: took 4.835928866s for fixHost
	I1227 10:24:56.973772  489746 start.go:83] releasing machines lock for "old-k8s-version-482317", held for 4.836002622s
	I1227 10:24:56.973847  489746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-482317
	I1227 10:24:56.990908  489746 ssh_runner.go:195] Run: cat /version.json
	I1227 10:24:56.990970  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.991043  489746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:24:56.991100  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:57.010026  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:57.013009  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:57.108064  489746 ssh_runner.go:195] Run: systemctl --version
	I1227 10:24:57.212637  489746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:24:57.258488  489746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:24:57.265110  489746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:24:57.265186  489746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:24:57.274735  489746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:24:57.274780  489746 start.go:496] detecting cgroup driver to use...
	I1227 10:24:57.274813  489746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:24:57.274881  489746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:24:57.290667  489746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:24:57.304653  489746 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:24:57.304774  489746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:24:57.321002  489746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:24:57.334602  489746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:24:57.445726  489746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:24:57.564365  489746 docker.go:234] disabling docker service ...
	I1227 10:24:57.564444  489746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:24:57.579399  489746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:24:57.592804  489746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:24:57.712081  489746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:24:57.832327  489746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:24:57.845047  489746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:24:57.860742  489746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1227 10:24:57.860821  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.870040  489746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:24:57.870111  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.879480  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.888346  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.897080  489746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:24:57.905049  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.913912  489746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.922140  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.931263  489746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:24:57.939067  489746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:24:57.946402  489746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:24:58.092764  489746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:24:58.280500  489746 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:24:58.280585  489746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:24:58.284715  489746 start.go:574] Will wait 60s for crictl version
	I1227 10:24:58.284791  489746 ssh_runner.go:195] Run: which crictl
	I1227 10:24:58.288406  489746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:24:58.317551  489746 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:24:58.317655  489746 ssh_runner.go:195] Run: crio --version
	I1227 10:24:58.347904  489746 ssh_runner.go:195] Run: crio --version
	I1227 10:24:58.382907  489746 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1227 10:24:58.385969  489746 cli_runner.go:164] Run: docker network inspect old-k8s-version-482317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:24:58.401653  489746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:24:58.405400  489746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:24:58.414779  489746 kubeadm.go:884] updating cluster {Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:24:58.414906  489746 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:24:58.414958  489746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:24:58.447902  489746 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:24:58.447923  489746 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:24:58.448023  489746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:24:58.473419  489746 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:24:58.473492  489746 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:24:58.473521  489746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1227 10:24:58.473666  489746 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-482317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:24:58.473790  489746 ssh_runner.go:195] Run: crio config
	I1227 10:24:58.544966  489746 cni.go:84] Creating CNI manager for ""
	I1227 10:24:58.544988  489746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:24:58.545005  489746 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:24:58.545029  489746 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-482317 NodeName:old-k8s-version-482317 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:24:58.545172  489746 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-482317"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:24:58.545501  489746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 10:24:58.556036  489746 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:24:58.556115  489746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:24:58.563603  489746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 10:24:58.576444  489746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:24:58.589450  489746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1227 10:24:58.602252  489746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:24:58.605880  489746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:24:58.615389  489746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:24:58.730028  489746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:24:58.753170  489746 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317 for IP: 192.168.76.2
	I1227 10:24:58.753196  489746 certs.go:195] generating shared ca certs ...
	I1227 10:24:58.753213  489746 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:58.753362  489746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:24:58.753416  489746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:24:58.753430  489746 certs.go:257] generating profile certs ...
	I1227 10:24:58.753516  489746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.key
	I1227 10:24:58.753587  489746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key.76d9b417
	I1227 10:24:58.753634  489746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.key
	I1227 10:24:58.753760  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:24:58.753798  489746 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:24:58.753812  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:24:58.753846  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:24:58.753875  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:24:58.753904  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:24:58.753951  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:24:58.754561  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:24:58.782150  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:24:58.802805  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:24:58.820876  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:24:58.843005  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 10:24:58.865404  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:24:58.883788  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:24:58.907181  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:24:58.927264  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:24:58.947620  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:24:58.980763  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:24:59.001510  489746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:24:59.016673  489746 ssh_runner.go:195] Run: openssl version
	I1227 10:24:59.023173  489746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:24:59.030908  489746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:24:59.038918  489746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:24:59.042734  489746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:24:59.042867  489746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:24:59.085464  489746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:24:59.092720  489746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:24:59.100638  489746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:24:59.107921  489746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:24:59.111672  489746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:24:59.111766  489746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:24:59.154970  489746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:24:59.162392  489746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:24:59.169690  489746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:24:59.177198  489746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:24:59.180905  489746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:24:59.181016  489746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:24:59.222029  489746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:24:59.229331  489746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:24:59.232865  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:24:59.273675  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:24:59.314621  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:24:59.355764  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:24:59.403103  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:24:59.460709  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 10:24:59.562187  489746 kubeadm.go:401] StartCluster: {Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:24:59.562319  489746 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:24:59.562425  489746 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:24:59.636069  489746 cri.go:96] found id: "6af676216486829c31c72726886da4d5b9d2fdd5e03d47e9d092cd74c92823fd"
	I1227 10:24:59.636092  489746 cri.go:96] found id: "5c18700dae648beeb6cbc946e81f00349e9db29024a7fcb4389e4ebb5f3220e3"
	I1227 10:24:59.636097  489746 cri.go:96] found id: "7904f50147b3a49201ac12cc375f895cdfbd6570c8043be40e8f86a6040e4ba7"
	I1227 10:24:59.636101  489746 cri.go:96] found id: "edba935460de1d0d6cf628ac3e09f2ff27ad3160fda618c38a75071e4b54afcc"
	I1227 10:24:59.636114  489746 cri.go:96] found id: ""
	I1227 10:24:59.636167  489746 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:24:59.665752  489746 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:24:59Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:24:59.665826  489746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:24:59.673648  489746 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:24:59.673722  489746 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:24:59.673809  489746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:24:59.681651  489746 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:24:59.682146  489746 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-482317" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:24:59.682306  489746 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-482317" cluster setting kubeconfig missing "old-k8s-version-482317" context setting]
	I1227 10:24:59.682619  489746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:59.683907  489746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:24:59.692586  489746 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 10:24:59.692673  489746 kubeadm.go:602] duration metric: took 18.93102ms to restartPrimaryControlPlane
	I1227 10:24:59.692703  489746 kubeadm.go:403] duration metric: took 130.522101ms to StartCluster
	I1227 10:24:59.692748  489746 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:59.692852  489746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:24:59.693492  489746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:59.693981  489746 config.go:182] Loaded profile config "old-k8s-version-482317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:24:59.693772  489746 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:24:59.694109  489746 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:24:59.694199  489746 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-482317"
	I1227 10:24:59.694233  489746 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-482317"
	W1227 10:24:59.694267  489746 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:24:59.694310  489746 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:24:59.694835  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:59.695489  489746 addons.go:70] Setting dashboard=true in profile "old-k8s-version-482317"
	I1227 10:24:59.695521  489746 addons.go:239] Setting addon dashboard=true in "old-k8s-version-482317"
	W1227 10:24:59.695529  489746 addons.go:248] addon dashboard should already be in state true
	I1227 10:24:59.695554  489746 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:24:59.696050  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:59.699172  489746 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-482317"
	I1227 10:24:59.699206  489746 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-482317"
	I1227 10:24:59.700208  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:59.702583  489746 out.go:179] * Verifying Kubernetes components...
	I1227 10:24:59.705514  489746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:24:59.750725  489746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:24:59.754203  489746 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:24:59.754226  489746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:24:59.754294  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:59.759576  489746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:24:59.762911  489746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:24:59.768567  489746 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-482317"
	W1227 10:24:59.768589  489746 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:24:59.768615  489746 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:24:59.769027  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:59.774529  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:24:59.774606  489746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:24:59.774690  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:59.805644  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:59.822288  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:59.828855  489746 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:24:59.828878  489746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:24:59.828941  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:59.865655  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:25:00.030065  489746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:25:00.061894  489746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:25:00.101734  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:25:00.101823  489746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:25:00.105574  489746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:25:00.165106  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:25:00.165131  489746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:25:00.273467  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:25:00.273493  489746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:25:00.460512  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:25:00.460602  489746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:25:00.487042  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:25:00.487129  489746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:25:00.516496  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:25:00.516580  489746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:25:00.535924  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:25:00.536036  489746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:25:00.558406  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:25:00.558504  489746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:25:00.580927  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:25:00.581020  489746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:25:00.602564  489746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:25:06.399305  489746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.369144126s)
	I1227 10:25:06.399369  489746 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.337393958s)
	I1227 10:25:06.399403  489746 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-482317" to be "Ready" ...
	I1227 10:25:06.399735  489746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.294070793s)
	I1227 10:25:06.430855  489746 node_ready.go:49] node "old-k8s-version-482317" is "Ready"
	I1227 10:25:06.430888  489746 node_ready.go:38] duration metric: took 31.472167ms for node "old-k8s-version-482317" to be "Ready" ...
	I1227 10:25:06.430903  489746 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:25:06.431005  489746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:25:06.909965  489746 api_server.go:72] duration metric: took 7.215875637s to wait for apiserver process to appear ...
	I1227 10:25:06.909994  489746 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:25:06.910015  489746 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:25:06.910347  489746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.307669346s)
	I1227 10:25:06.913300  489746 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-482317 addons enable metrics-server
	
	I1227 10:25:06.916366  489746 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 10:25:06.919266  489746 addons.go:530] duration metric: took 7.225154713s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 10:25:06.920895  489746 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:25:06.922834  489746 api_server.go:141] control plane version: v1.28.0
	I1227 10:25:06.922886  489746 api_server.go:131] duration metric: took 12.884174ms to wait for apiserver health ...
	I1227 10:25:06.922896  489746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:25:06.929036  489746 system_pods.go:59] 8 kube-system pods found
	I1227 10:25:06.929156  489746 system_pods.go:61] "coredns-5dd5756b68-xtcrs" [a1ff47cc-238c-4217-8591-ff8b26b907da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:25:06.929203  489746 system_pods.go:61] "etcd-old-k8s-version-482317" [70dce620-1f12-49f9-8f70-ab1eb4c021eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:25:06.929230  489746 system_pods.go:61] "kindnet-4jvpn" [35d8c991-0977-4f5f-95d3-d06fdf9b1481] Running
	I1227 10:25:06.929258  489746 system_pods.go:61] "kube-apiserver-old-k8s-version-482317" [970f565c-b1c3-40cd-8165-f425b311a9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:25:06.929313  489746 system_pods.go:61] "kube-controller-manager-old-k8s-version-482317" [41aa78cd-9c7b-49f7-bcc1-e85c6d9d606e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:25:06.929349  489746 system_pods.go:61] "kube-proxy-gr6gq" [3a6b528b-199e-43a6-8a9b-f9157d3800a0] Running
	I1227 10:25:06.929378  489746 system_pods.go:61] "kube-scheduler-old-k8s-version-482317" [42afac7c-9449-4b76-b9d1-ef7655e77163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:25:06.929409  489746 system_pods.go:61] "storage-provisioner" [0bd371c6-e3b4-4c0b-8a3a-f17eade42f06] Running
	I1227 10:25:06.929447  489746 system_pods.go:74] duration metric: took 6.542506ms to wait for pod list to return data ...
	I1227 10:25:06.929469  489746 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:25:06.932277  489746 default_sa.go:45] found service account: "default"
	I1227 10:25:06.932300  489746 default_sa.go:55] duration metric: took 2.811885ms for default service account to be created ...
	I1227 10:25:06.932310  489746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:25:06.936398  489746 system_pods.go:86] 8 kube-system pods found
	I1227 10:25:06.936430  489746 system_pods.go:89] "coredns-5dd5756b68-xtcrs" [a1ff47cc-238c-4217-8591-ff8b26b907da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:25:06.936440  489746 system_pods.go:89] "etcd-old-k8s-version-482317" [70dce620-1f12-49f9-8f70-ab1eb4c021eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:25:06.936446  489746 system_pods.go:89] "kindnet-4jvpn" [35d8c991-0977-4f5f-95d3-d06fdf9b1481] Running
	I1227 10:25:06.936453  489746 system_pods.go:89] "kube-apiserver-old-k8s-version-482317" [970f565c-b1c3-40cd-8165-f425b311a9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:25:06.936462  489746 system_pods.go:89] "kube-controller-manager-old-k8s-version-482317" [41aa78cd-9c7b-49f7-bcc1-e85c6d9d606e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:25:06.936467  489746 system_pods.go:89] "kube-proxy-gr6gq" [3a6b528b-199e-43a6-8a9b-f9157d3800a0] Running
	I1227 10:25:06.936475  489746 system_pods.go:89] "kube-scheduler-old-k8s-version-482317" [42afac7c-9449-4b76-b9d1-ef7655e77163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:25:06.936480  489746 system_pods.go:89] "storage-provisioner" [0bd371c6-e3b4-4c0b-8a3a-f17eade42f06] Running
	I1227 10:25:06.936487  489746 system_pods.go:126] duration metric: took 4.171923ms to wait for k8s-apps to be running ...
	I1227 10:25:06.936494  489746 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:25:06.936555  489746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:25:06.952874  489746 system_svc.go:56] duration metric: took 16.368787ms WaitForService to wait for kubelet
	I1227 10:25:06.952946  489746 kubeadm.go:587] duration metric: took 7.25885937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:25:06.952980  489746 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:25:06.965037  489746 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:25:06.965110  489746 node_conditions.go:123] node cpu capacity is 2
	I1227 10:25:06.965138  489746 node_conditions.go:105] duration metric: took 12.138714ms to run NodePressure ...
	I1227 10:25:06.965164  489746 start.go:242] waiting for startup goroutines ...
	I1227 10:25:06.965208  489746 start.go:247] waiting for cluster config update ...
	I1227 10:25:06.965234  489746 start.go:256] writing updated cluster config ...
	I1227 10:25:06.965544  489746 ssh_runner.go:195] Run: rm -f paused
	I1227 10:25:06.975942  489746 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:25:06.981600  489746 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xtcrs" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:25:08.992878  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:11.488254  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:13.987561  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:15.989029  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:18.487524  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:20.489471  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:22.987713  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:24.992785  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:27.489662  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:29.988220  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:31.988505  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:33.991428  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:36.487846  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:38.488079  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:40.488446  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	I1227 10:25:42.488089  489746 pod_ready.go:94] pod "coredns-5dd5756b68-xtcrs" is "Ready"
	I1227 10:25:42.488117  489746 pod_ready.go:86] duration metric: took 35.506448155s for pod "coredns-5dd5756b68-xtcrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.491285  489746 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.496300  489746 pod_ready.go:94] pod "etcd-old-k8s-version-482317" is "Ready"
	I1227 10:25:42.496331  489746 pod_ready.go:86] duration metric: took 5.019324ms for pod "etcd-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.499372  489746 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.504231  489746 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-482317" is "Ready"
	I1227 10:25:42.504313  489746 pod_ready.go:86] duration metric: took 4.912968ms for pod "kube-apiserver-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.507499  489746 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.685945  489746 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-482317" is "Ready"
	I1227 10:25:42.685978  489746 pod_ready.go:86] duration metric: took 178.446671ms for pod "kube-controller-manager-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.887361  489746 pod_ready.go:83] waiting for pod "kube-proxy-gr6gq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:43.285835  489746 pod_ready.go:94] pod "kube-proxy-gr6gq" is "Ready"
	I1227 10:25:43.285902  489746 pod_ready.go:86] duration metric: took 398.501406ms for pod "kube-proxy-gr6gq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:43.487193  489746 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:43.885721  489746 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-482317" is "Ready"
	I1227 10:25:43.885753  489746 pod_ready.go:86] duration metric: took 398.527154ms for pod "kube-scheduler-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:43.885766  489746 pod_ready.go:40] duration metric: took 36.909721979s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:25:43.946172  489746 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1227 10:25:43.950013  489746 out.go:203] 
	W1227 10:25:43.953006  489746 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 10:25:43.955914  489746 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:25:43.958737  489746 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-482317" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.504411712Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4c837e2d-2dd8-41ce-9658-55d9877e808f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.505635158Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8a7cdfe5-11e6-47d4-b4bf-6531ca701217 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.50686091Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh/dashboard-metrics-scraper" id=2c6da427-14df-4168-8ea2-1b8d261fc2b2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.506981969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.517869365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.518629717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.540244063Z" level=info msg="Created container 85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh/dashboard-metrics-scraper" id=2c6da427-14df-4168-8ea2-1b8d261fc2b2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.541261673Z" level=info msg="Starting container: 85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1" id=45c8357a-8573-44e0-920b-c45961ee8203 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.54299746Z" level=info msg="Started container" PID=1643 containerID=85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh/dashboard-metrics-scraper id=45c8357a-8573-44e0-920b-c45961ee8203 name=/runtime.v1.RuntimeService/StartContainer sandboxID=63e3683ec95a7b8ae6a80b4bd5dcc788703fb643d932c73c2ef512853bd5ff97
	Dec 27 10:25:38 old-k8s-version-482317 conmon[1641]: conmon 85802b12b64fec4b7359 <ninfo>: container 1643 exited with status 1
	Dec 27 10:25:39 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:39.136437828Z" level=info msg="Removing container: 237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741" id=77c387d3-035a-4295-8aac-e75aea39eafc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:25:39 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:39.144716098Z" level=info msg="Error loading conmon cgroup of container 237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741: cgroup deleted" id=77c387d3-035a-4295-8aac-e75aea39eafc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:25:39 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:39.149847021Z" level=info msg="Removed container 237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh/dashboard-metrics-scraper" id=77c387d3-035a-4295-8aac-e75aea39eafc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.743026544Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.749236157Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.749276379Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.749305392Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.752485092Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.752644339Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.752714559Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.756195414Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.756355883Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.756396064Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.759538324Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.759574049Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	85802b12b64fe       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   63e3683ec95a7       dashboard-metrics-scraper-5f989dc9cf-fzrvh       kubernetes-dashboard
	7b0ce55a826e7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   c61e5b8b09a91       storage-provisioner                              kube-system
	0e925ff8e67ac       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago      Running             kubernetes-dashboard        0                   38a53d9a9816c       kubernetes-dashboard-8694d4445c-jnpvk            kubernetes-dashboard
	0579bd17b999c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           53 seconds ago      Running             coredns                     1                   611714da5e406       coredns-5dd5756b68-xtcrs                         kube-system
	bfc8e3ab07b62       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   10dc585babfbd       busybox                                          default
	9b09d87f39c51       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           53 seconds ago      Running             kube-proxy                  1                   3fb6051b0d5d1       kube-proxy-gr6gq                                 kube-system
	a6a43cacb933a       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   415013ca60da2       kindnet-4jvpn                                    kube-system
	42968e8e6aa87       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   c61e5b8b09a91       storage-provisioner                              kube-system
	6af6762164868       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           59 seconds ago      Running             etcd                        1                   1127ccd3e80e0       etcd-old-k8s-version-482317                      kube-system
	5c18700dae648       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           59 seconds ago      Running             kube-apiserver              1                   d42303fe7caf1       kube-apiserver-old-k8s-version-482317            kube-system
	7904f50147b3a       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           59 seconds ago      Running             kube-controller-manager     1                   1280f8f573eb8       kube-controller-manager-old-k8s-version-482317   kube-system
	edba935460de1       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           59 seconds ago      Running             kube-scheduler              1                   827bd65104e88       kube-scheduler-old-k8s-version-482317            kube-system
	
	
	==> coredns [0579bd17b999c40f300161843dca65348d880147d408e942833f0e8a1efa1b67] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48560 - 10077 "HINFO IN 7406498039858455344.8016397607005876646. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015039485s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-482317
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-482317
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=old-k8s-version-482317
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_23_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:23:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-482317
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:25:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:25:35 +0000   Sat, 27 Dec 2025 10:23:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:25:35 +0000   Sat, 27 Dec 2025 10:23:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:25:35 +0000   Sat, 27 Dec 2025 10:23:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:25:35 +0000   Sat, 27 Dec 2025 10:24:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-482317
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                a7aa5659-6104-4ad4-974f-9a450eb0c75f
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-xtcrs                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-482317                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-4jvpn                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-482317             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-482317    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-gr6gq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-482317             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fzrvh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jnpvk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-482317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-482317 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-482317 event: Registered Node old-k8s-version-482317 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-482317 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 60s)    kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 60s)    kubelet          Node old-k8s-version-482317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 60s)    kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-482317 event: Registered Node old-k8s-version-482317 in Controller
	
	
	==> dmesg <==
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	[Dec27 09:53] overlayfs: idmapped layers are currently not supported
	[Dec27 09:57] overlayfs: idmapped layers are currently not supported
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6af676216486829c31c72726886da4d5b9d2fdd5e03d47e9d092cd74c92823fd] <==
	{"level":"info","ts":"2025-12-27T10:25:00.080771Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:25:00.080783Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:25:00.081058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T10:25:00.081139Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-27T10:25:00.081245Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:25:00.081285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:25:00.102217Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:25:00.102643Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:25:00.102417Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:25:00.104494Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:25:00.117778Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:25:01.875577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:25:01.875711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:25:01.875764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:25:01.875803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:25:01.87584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:25:01.875881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:25:01.875914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:25:01.881892Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-482317 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:25:01.88207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:25:01.883089Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:25:01.887913Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:25:01.888912Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:25:01.894915Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:25:01.895009Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:25:59 up  2:08,  0 user,  load average: 1.50, 1.41, 1.80
	Linux old-k8s-version-482317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6a43cacb933af66d20a2d7793c31b0b116cbea7d00c6ae9dceb483bf2f0b2bd] <==
	I1227 10:25:05.557697       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:25:05.557928       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:25:05.558058       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:25:05.558070       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:25:05.558079       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:25:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:25:05.742883       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:25:05.742963       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:25:05.742975       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:25:05.743933       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:25:35.743694       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:25:35.743694       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:25:35.743829       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:25:35.744030       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:25:37.143294       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:25:37.143395       1 metrics.go:72] Registering metrics
	I1227 10:25:37.143476       1 controller.go:711] "Syncing nftables rules"
	I1227 10:25:45.742660       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:25:45.742713       1 main.go:301] handling current node
	I1227 10:25:55.743219       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:25:55.743268       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5c18700dae648beeb6cbc946e81f00349e9db29024a7fcb4389e4ebb5f3220e3] <==
	I1227 10:25:04.667029       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1227 10:25:04.862372       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 10:25:04.874141       1 aggregator.go:166] initial CRD sync complete...
	I1227 10:25:04.874239       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 10:25:04.874270       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:25:04.874323       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:25:04.882645       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:25:04.945065       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 10:25:04.950717       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 10:25:04.950800       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 10:25:04.951849       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 10:25:04.953481       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 10:25:04.953557       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:25:04.972145       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 10:25:05.601958       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 10:25:06.717026       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 10:25:06.764398       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 10:25:06.790377       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:25:06.801350       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:25:06.817047       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 10:25:06.878037       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.217.9"}
	I1227 10:25:06.901293       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.27.29"}
	I1227 10:25:17.170667       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1227 10:25:17.202566       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 10:25:17.214678       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7904f50147b3a49201ac12cc375f895cdfbd6570c8043be40e8f86a6040e4ba7] <==
	I1227 10:25:17.265651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.022µs"
	I1227 10:25:17.273719       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-jnpvk"
	I1227 10:25:17.285061       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-fzrvh"
	I1227 10:25:17.295120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.196079ms"
	I1227 10:25:17.308222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.576472ms"
	I1227 10:25:17.325175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="29.879791ms"
	I1227 10:25:17.325404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="89.863µs"
	I1227 10:25:17.328196       1 shared_informer.go:318] Caches are synced for disruption
	I1227 10:25:17.339952       1 shared_informer.go:318] Caches are synced for persistent volume
	I1227 10:25:17.349985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.612445ms"
	I1227 10:25:17.350073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.432µs"
	I1227 10:25:17.378277       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 10:25:17.417210       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 10:25:17.721830       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:25:17.721948       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 10:25:17.748634       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:25:24.110500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.005846ms"
	I1227 10:25:24.110934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.112µs"
	I1227 10:25:28.110692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.233µs"
	I1227 10:25:29.119358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.848µs"
	I1227 10:25:30.120618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.867µs"
	I1227 10:25:39.152246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.952µs"
	I1227 10:25:42.111618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.283182ms"
	I1227 10:25:42.113044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="159.28µs"
	I1227 10:25:48.518893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.882µs"
	
	
	==> kube-proxy [9b09d87f39c5152bb435531967cef400dd6c3797b38f7965e024bc264e021c98] <==
	I1227 10:25:05.639394       1 server_others.go:69] "Using iptables proxy"
	I1227 10:25:05.680746       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1227 10:25:05.982514       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:25:05.985262       1 server_others.go:152] "Using iptables Proxier"
	I1227 10:25:05.985364       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 10:25:05.985450       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 10:25:05.987210       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 10:25:05.987661       1 server.go:846] "Version info" version="v1.28.0"
	I1227 10:25:05.987888       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:25:05.988653       1 config.go:188] "Starting service config controller"
	I1227 10:25:05.988725       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 10:25:05.988774       1 config.go:97] "Starting endpoint slice config controller"
	I1227 10:25:05.988801       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 10:25:05.989386       1 config.go:315] "Starting node config controller"
	I1227 10:25:05.989432       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 10:25:06.091632       1 shared_informer.go:318] Caches are synced for node config
	I1227 10:25:06.091662       1 shared_informer.go:318] Caches are synced for service config
	I1227 10:25:06.091688       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [edba935460de1d0d6cf628ac3e09f2ff27ad3160fda618c38a75071e4b54afcc] <==
	I1227 10:25:01.666678       1 serving.go:348] Generated self-signed cert in-memory
	W1227 10:25:04.836375       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:25:04.836472       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:25:04.836509       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:25:04.836540       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:25:04.899289       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1227 10:25:04.899691       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:25:04.901055       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:25:04.901127       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 10:25:04.902269       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1227 10:25:04.902342       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 10:25:05.004509       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.301082     780 topology_manager.go:215] "Topology Admit Handler" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-fzrvh"
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.405950     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e785d875-fcdd-4cd8-b425-45a6c5b06cca-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fzrvh\" (UID: \"e785d875-fcdd-4cd8-b425-45a6c5b06cca\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh"
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.406009     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6zpv\" (UniqueName: \"kubernetes.io/projected/e785d875-fcdd-4cd8-b425-45a6c5b06cca-kube-api-access-x6zpv\") pod \"dashboard-metrics-scraper-5f989dc9cf-fzrvh\" (UID: \"e785d875-fcdd-4cd8-b425-45a6c5b06cca\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh"
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.406038     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/15c15981-5af5-4212-be07-05f623f48f13-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jnpvk\" (UID: \"15c15981-5af5-4212-be07-05f623f48f13\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jnpvk"
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.406089     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57tt4\" (UniqueName: \"kubernetes.io/projected/15c15981-5af5-4212-be07-05f623f48f13-kube-api-access-57tt4\") pod \"kubernetes-dashboard-8694d4445c-jnpvk\" (UID: \"15c15981-5af5-4212-be07-05f623f48f13\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jnpvk"
	Dec 27 10:25:18 old-k8s-version-482317 kubelet[780]: W1227 10:25:18.514723     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/crio-38a53d9a9816c177329ef78112d8c6977bac58ffff920adb70fd7e63f0594b61 WatchSource:0}: Error finding container 38a53d9a9816c177329ef78112d8c6977bac58ffff920adb70fd7e63f0594b61: Status 404 returned error can't find the container with id 38a53d9a9816c177329ef78112d8c6977bac58ffff920adb70fd7e63f0594b61
	Dec 27 10:25:18 old-k8s-version-482317 kubelet[780]: W1227 10:25:18.532686     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/crio-63e3683ec95a7b8ae6a80b4bd5dcc788703fb643d932c73c2ef512853bd5ff97 WatchSource:0}: Error finding container 63e3683ec95a7b8ae6a80b4bd5dcc788703fb643d932c73c2ef512853bd5ff97: Status 404 returned error can't find the container with id 63e3683ec95a7b8ae6a80b4bd5dcc788703fb643d932c73c2ef512853bd5ff97
	Dec 27 10:25:28 old-k8s-version-482317 kubelet[780]: I1227 10:25:28.093356     780 scope.go:117] "RemoveContainer" containerID="5893c8551354ae850ea22df641c42d4c8685dbac3f6630b58ba7eb4aa5775777"
	Dec 27 10:25:28 old-k8s-version-482317 kubelet[780]: I1227 10:25:28.112310     780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jnpvk" podStartSLOduration=6.590110553 podCreationTimestamp="2025-12-27 10:25:17 +0000 UTC" firstStartedPulling="2025-12-27 10:25:18.518262424 +0000 UTC m=+19.770837667" lastFinishedPulling="2025-12-27 10:25:23.040393511 +0000 UTC m=+24.292968754" observedRunningTime="2025-12-27 10:25:24.100891941 +0000 UTC m=+25.353467183" watchObservedRunningTime="2025-12-27 10:25:28.11224164 +0000 UTC m=+29.364816883"
	Dec 27 10:25:29 old-k8s-version-482317 kubelet[780]: I1227 10:25:29.097849     780 scope.go:117] "RemoveContainer" containerID="5893c8551354ae850ea22df641c42d4c8685dbac3f6630b58ba7eb4aa5775777"
	Dec 27 10:25:29 old-k8s-version-482317 kubelet[780]: I1227 10:25:29.098713     780 scope.go:117] "RemoveContainer" containerID="237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741"
	Dec 27 10:25:29 old-k8s-version-482317 kubelet[780]: E1227 10:25:29.099185     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fzrvh_kubernetes-dashboard(e785d875-fcdd-4cd8-b425-45a6c5b06cca)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca"
	Dec 27 10:25:30 old-k8s-version-482317 kubelet[780]: I1227 10:25:30.104344     780 scope.go:117] "RemoveContainer" containerID="237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741"
	Dec 27 10:25:30 old-k8s-version-482317 kubelet[780]: E1227 10:25:30.104664     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fzrvh_kubernetes-dashboard(e785d875-fcdd-4cd8-b425-45a6c5b06cca)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca"
	Dec 27 10:25:36 old-k8s-version-482317 kubelet[780]: I1227 10:25:36.120813     780 scope.go:117] "RemoveContainer" containerID="42968e8e6aa87735f51cd79fbe984e9063af3193c09105ee955eb81677f295b5"
	Dec 27 10:25:38 old-k8s-version-482317 kubelet[780]: I1227 10:25:38.503247     780 scope.go:117] "RemoveContainer" containerID="237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741"
	Dec 27 10:25:39 old-k8s-version-482317 kubelet[780]: I1227 10:25:39.130744     780 scope.go:117] "RemoveContainer" containerID="237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741"
	Dec 27 10:25:39 old-k8s-version-482317 kubelet[780]: I1227 10:25:39.130946     780 scope.go:117] "RemoveContainer" containerID="85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1"
	Dec 27 10:25:39 old-k8s-version-482317 kubelet[780]: E1227 10:25:39.131533     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fzrvh_kubernetes-dashboard(e785d875-fcdd-4cd8-b425-45a6c5b06cca)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca"
	Dec 27 10:25:48 old-k8s-version-482317 kubelet[780]: I1227 10:25:48.504081     780 scope.go:117] "RemoveContainer" containerID="85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1"
	Dec 27 10:25:48 old-k8s-version-482317 kubelet[780]: E1227 10:25:48.504958     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fzrvh_kubernetes-dashboard(e785d875-fcdd-4cd8-b425-45a6c5b06cca)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca"
	Dec 27 10:25:56 old-k8s-version-482317 kubelet[780]: I1227 10:25:56.157137     780 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 10:25:56 old-k8s-version-482317 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:25:56 old-k8s-version-482317 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:25:56 old-k8s-version-482317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0e925ff8e67acd4543b394e63d2b4c088abc3bdd579e1132f3c6096feceec216] <==
	2025/12/27 10:25:23 Starting overwatch
	2025/12/27 10:25:23 Using namespace: kubernetes-dashboard
	2025/12/27 10:25:23 Using in-cluster config to connect to apiserver
	2025/12/27 10:25:23 Using secret token for csrf signing
	2025/12/27 10:25:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:25:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:25:23 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 10:25:23 Generating JWE encryption key
	2025/12/27 10:25:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:25:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:25:23 Initializing JWE encryption key from synchronized object
	2025/12/27 10:25:23 Creating in-cluster Sidecar client
	2025/12/27 10:25:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:25:23 Serving insecurely on HTTP port: 9090
	2025/12/27 10:25:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [42968e8e6aa87735f51cd79fbe984e9063af3193c09105ee955eb81677f295b5] <==
	I1227 10:25:05.576516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:25:35.578833       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7b0ce55a826e7e76318fd3f47cc892955448f52ed3d924c467eb4effc59b9afa] <==
	I1227 10:25:36.169587       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:25:36.183584       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:25:36.183642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 10:25:53.583422       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:25:53.583473       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a92e87e6-a9f2-4729-a034-2de7c1eae4b3", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-482317_ddffb137-5faf-42ed-be9f-be8d7290b420 became leader
	I1227 10:25:53.583708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-482317_ddffb137-5faf-42ed-be9f-be8d7290b420!
	I1227 10:25:53.684010       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-482317_ddffb137-5faf-42ed-be9f-be8d7290b420!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-482317 -n old-k8s-version-482317
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-482317 -n old-k8s-version-482317: exit status 2 (353.456404ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-482317 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
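For reference, the --field-selector=status.phase!=Running argument in the command above is the standard way to list only pods that are not in the Running phase, and the jsonpath expression prints just their names. Below is a minimal client-go sketch of an equivalent query; it assumes a kubeconfig at the default path and uses the current context rather than selecting the old-k8s-version-482317 context explicitly, and none of its identifiers come from the test harness itself.

	// Sketch only: list pods in all namespaces whose phase is not Running,
	// mirroring the field selector used by the post-mortem kubectl step.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumes ~/.kube/config and its current context (a simplification).
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace + "/" + p.Name)
		}
	}

An empty result from this query (as in the run above, which printed nothing) means every pod reported phase Running at the time of the check.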
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-482317
helpers_test.go:244: (dbg) docker inspect old-k8s-version-482317:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb",
	        "Created": "2025-12-27T10:23:35.1004286Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:24:52.20848896Z",
	            "FinishedAt": "2025-12-27T10:24:51.124394197Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/hosts",
	        "LogPath": "/var/lib/docker/containers/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb-json.log",
	        "Name": "/old-k8s-version-482317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-482317:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-482317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb",
	                "LowerDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/773aceedf288702b018e402eb07d7340ae6560844c0803ed5c805c5032285c01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-482317",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-482317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-482317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-482317",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-482317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "197e095a150270c4048f6a2ed45438acfce8260cd4a021112dc8d3040f28eae8",
	            "SandboxKey": "/var/run/docker/netns/197e095a1502",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-482317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:79:ae:9b:4e:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76ec4721253c21d528fd72ab4bdb6e7b5be9293e371f48ba721c982435ec2193",
	                    "EndpointID": "d8adba187b1225efb8482c6c4e589720577d8ff4456cc59312aa24abd9af1eff",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-482317",
	                        "d3ed077d2566"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
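The inspect dump above captures the paused container's HostConfig, mounts, image config, and its attachment to the "old-k8s-version-482317" network at 192.168.76.2. To re-run the same probes by hand against this profile, a minimal sketch (using only the profile name and the inspect templates that appear verbatim later in this log) would be:

	docker container inspect old-k8s-version-482317 --format '{{.State.Status}}'
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-482317

The first command returns the container state that the test framework checks next; the second resolves the host port published for the guest's SSH port (33413 in this run).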
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482317 -n old-k8s-version-482317
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482317 -n old-k8s-version-482317: exit status 2 (646.333485ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-482317 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-482317 logs -n 25: (1.297221074s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-785247 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo containerd config dump                                                                                                                                                                                                  │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo crio config                                                                                                                                                                                                             │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ delete  │ -p cilium-785247                                                                                                                                                                                                                              │ cilium-785247             │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:17 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ delete  │ -p cert-expiration-528820                                                                                                                                                                                                                     │ cert-expiration-528820    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ start   │ -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-915850 │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │                     │
	│ delete  │ -p force-systemd-env-193016                                                                                                                                                                                                                   │ force-systemd-env-193016  │ jenkins │ v1.37.0 │ 27 Dec 25 10:22 UTC │ 27 Dec 25 10:22 UTC │
	│ start   │ -p cert-options-810217 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ cert-options-810217 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ -p cert-options-810217 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ delete  │ -p cert-options-810217                                                                                                                                                                                                                        │ cert-options-810217       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │                     │
	│ stop    │ -p old-k8s-version-482317 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-482317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:25 UTC │
	│ image   │ old-k8s-version-482317 image list --format=json                                                                                                                                                                                               │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │ 27 Dec 25 10:25 UTC │
	│ pause   │ -p old-k8s-version-482317 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-482317    │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:24:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:24:51.776484  489746 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:24:51.776685  489746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:24:51.776711  489746 out.go:374] Setting ErrFile to fd 2...
	I1227 10:24:51.776732  489746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:24:51.777032  489746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:24:51.777455  489746 out.go:368] Setting JSON to false
	I1227 10:24:51.778376  489746 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7645,"bootTime":1766823447,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:24:51.778466  489746 start.go:143] virtualization:  
	I1227 10:24:51.783722  489746 out.go:179] * [old-k8s-version-482317] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:24:51.786967  489746 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:24:51.787045  489746 notify.go:221] Checking for updates...
	I1227 10:24:51.791553  489746 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:24:51.794607  489746 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:24:51.797617  489746 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:24:51.800666  489746 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:24:51.803721  489746 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:24:51.807244  489746 config.go:182] Loaded profile config "old-k8s-version-482317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:24:51.810897  489746 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I1227 10:24:50.696647  478121 out.go:252]   - Booting up control plane ...
	I1227 10:24:50.696778  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:24:50.696893  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:24:50.696978  478121 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:24:50.697112  478121 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:24:50.697242  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:24:50.697361  478121 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:24:50.697455  478121 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:24:50.697499  478121 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:24:50.697634  478121 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:24:50.697747  478121 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:24:50.697815  478121 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001046036s
	I1227 10:24:50.697823  478121 kubeadm.go:319] 
	I1227 10:24:50.697879  478121 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:24:50.697918  478121 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:24:50.698026  478121 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:24:50.698034  478121 kubeadm.go:319] 
	I1227 10:24:50.698138  478121 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:24:50.698174  478121 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:24:50.698209  478121 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1227 10:24:50.698340  478121 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-915850 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001046036s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 10:24:50.698433  478121 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1227 10:24:50.698709  478121 kubeadm.go:319] 
	I1227 10:24:51.167170  478121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:24:51.185518  478121 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:24:51.185583  478121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:24:51.197163  478121 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:24:51.197188  478121 kubeadm.go:158] found existing configuration files:
	
	I1227 10:24:51.197242  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:24:51.207119  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:24:51.207184  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:24:51.215621  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:24:51.225127  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:24:51.225192  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:24:51.235213  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:24:51.245042  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:24:51.245107  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:24:51.253351  478121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:24:51.262229  478121 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:24:51.262288  478121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:24:51.270965  478121 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:24:51.466154  478121 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:24:51.466578  478121 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:24:51.549199  478121 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:24:51.814002  489746 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:24:51.860181  489746 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:24:51.860387  489746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:24:51.972731  489746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:24:51.95210042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:24:51.972841  489746 docker.go:319] overlay module found
	I1227 10:24:51.976084  489746 out.go:179] * Using the docker driver based on existing profile
	I1227 10:24:51.978922  489746 start.go:309] selected driver: docker
	I1227 10:24:51.978941  489746 start.go:928] validating driver "docker" against &{Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:24:51.979048  489746 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:24:51.979759  489746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:24:52.075233  489746 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:24:52.065007288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:24:52.075598  489746 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:24:52.075637  489746 cni.go:84] Creating CNI manager for ""
	I1227 10:24:52.075694  489746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:24:52.075736  489746 start.go:353] cluster config:
	{Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:24:52.079032  489746 out.go:179] * Starting "old-k8s-version-482317" primary control-plane node in "old-k8s-version-482317" cluster
	I1227 10:24:52.081900  489746 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:24:52.084698  489746 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:24:52.087559  489746 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:24:52.087609  489746 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:24:52.087625  489746 cache.go:65] Caching tarball of preloaded images
	I1227 10:24:52.087714  489746 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:24:52.087730  489746 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 10:24:52.087861  489746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/config.json ...
	I1227 10:24:52.088133  489746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:24:52.137623  489746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:24:52.137650  489746 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:24:52.137667  489746 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:24:52.137699  489746 start.go:360] acquireMachinesLock for old-k8s-version-482317: {Name:mk4c0cd3041b29cfcb95b36c1e5eae64b45ad166 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:24:52.137754  489746 start.go:364] duration metric: took 37.933µs to acquireMachinesLock for "old-k8s-version-482317"
	I1227 10:24:52.137790  489746 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:24:52.137800  489746 fix.go:54] fixHost starting: 
	I1227 10:24:52.138066  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:52.163283  489746 fix.go:112] recreateIfNeeded on old-k8s-version-482317: state=Stopped err=<nil>
	W1227 10:24:52.163325  489746 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:24:52.166620  489746 out.go:252] * Restarting existing docker container for "old-k8s-version-482317" ...
	I1227 10:24:52.166731  489746 cli_runner.go:164] Run: docker start old-k8s-version-482317
	I1227 10:24:52.538968  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:52.567284  489746 kic.go:430] container "old-k8s-version-482317" state is running.
	I1227 10:24:52.567681  489746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-482317
	I1227 10:24:52.598766  489746 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/config.json ...
	I1227 10:24:52.598991  489746 machine.go:94] provisionDockerMachine start ...
	I1227 10:24:52.599058  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:52.621968  489746 main.go:144] libmachine: Using SSH client type: native
	I1227 10:24:52.622297  489746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 10:24:52.622313  489746 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:24:52.625954  489746 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:24:55.771800  489746 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-482317
	
	I1227 10:24:55.771824  489746 ubuntu.go:182] provisioning hostname "old-k8s-version-482317"
	I1227 10:24:55.771897  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:55.790784  489746 main.go:144] libmachine: Using SSH client type: native
	I1227 10:24:55.791115  489746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 10:24:55.791130  489746 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-482317 && echo "old-k8s-version-482317" | sudo tee /etc/hostname
	I1227 10:24:55.941808  489746 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-482317
	
	I1227 10:24:55.941890  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:55.960768  489746 main.go:144] libmachine: Using SSH client type: native
	I1227 10:24:55.961091  489746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 10:24:55.961107  489746 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-482317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-482317/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-482317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:24:56.104430  489746 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:24:56.104457  489746 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:24:56.104480  489746 ubuntu.go:190] setting up certificates
	I1227 10:24:56.104489  489746 provision.go:84] configureAuth start
	I1227 10:24:56.104553  489746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-482317
	I1227 10:24:56.121153  489746 provision.go:143] copyHostCerts
	I1227 10:24:56.121223  489746 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:24:56.121243  489746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:24:56.121319  489746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:24:56.121442  489746 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:24:56.121454  489746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:24:56.121483  489746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:24:56.121552  489746 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:24:56.121559  489746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:24:56.121585  489746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:24:56.121647  489746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-482317 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-482317]
	I1227 10:24:56.186274  489746 provision.go:177] copyRemoteCerts
	I1227 10:24:56.186346  489746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:24:56.186391  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.203103  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:56.300368  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1227 10:24:56.318174  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:24:56.335865  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:24:56.352664  489746 provision.go:87] duration metric: took 248.153833ms to configureAuth
	I1227 10:24:56.352690  489746 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:24:56.352884  489746 config.go:182] Loaded profile config "old-k8s-version-482317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:24:56.352994  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.370932  489746 main.go:144] libmachine: Using SSH client type: native
	I1227 10:24:56.371244  489746 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1227 10:24:56.371259  489746 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:24:56.704622  489746 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:24:56.704649  489746 machine.go:97] duration metric: took 4.105638811s to provisionDockerMachine
	I1227 10:24:56.704661  489746 start.go:293] postStartSetup for "old-k8s-version-482317" (driver="docker")
	I1227 10:24:56.704672  489746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:24:56.704733  489746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:24:56.704779  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.725302  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:56.824217  489746 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:24:56.827944  489746 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:24:56.828000  489746 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:24:56.828014  489746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:24:56.828075  489746 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:24:56.828170  489746 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:24:56.828279  489746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:24:56.835849  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:24:56.853953  489746 start.go:296] duration metric: took 149.275092ms for postStartSetup
	I1227 10:24:56.854037  489746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:24:56.854077  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.871463  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:56.969092  489746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:24:56.973745  489746 fix.go:56] duration metric: took 4.835928866s for fixHost
	I1227 10:24:56.973772  489746 start.go:83] releasing machines lock for "old-k8s-version-482317", held for 4.836002622s
	I1227 10:24:56.973847  489746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-482317
	I1227 10:24:56.990908  489746 ssh_runner.go:195] Run: cat /version.json
	I1227 10:24:56.990970  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:56.991043  489746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:24:56.991100  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:57.010026  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:57.013009  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:57.108064  489746 ssh_runner.go:195] Run: systemctl --version
	I1227 10:24:57.212637  489746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:24:57.258488  489746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:24:57.265110  489746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:24:57.265186  489746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:24:57.274735  489746 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:24:57.274780  489746 start.go:496] detecting cgroup driver to use...
	I1227 10:24:57.274813  489746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:24:57.274881  489746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:24:57.290667  489746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:24:57.304653  489746 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:24:57.304774  489746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:24:57.321002  489746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:24:57.334602  489746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:24:57.445726  489746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:24:57.564365  489746 docker.go:234] disabling docker service ...
	I1227 10:24:57.564444  489746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:24:57.579399  489746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:24:57.592804  489746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:24:57.712081  489746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:24:57.832327  489746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:24:57.845047  489746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:24:57.860742  489746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1227 10:24:57.860821  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.870040  489746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:24:57.870111  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.879480  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.888346  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.897080  489746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:24:57.905049  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.913912  489746 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.922140  489746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:24:57.931263  489746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:24:57.939067  489746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:24:57.946402  489746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:24:58.092764  489746 ssh_runner.go:195] Run: sudo systemctl restart crio
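Annotation (not part of the log): the sed commands above rewrite CRI-O's 02-crio.conf drop-in (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and point crictl at the CRI-O socket before the daemon-reload and restart. A minimal way to confirm the result, run inside the node (for example after out/minikube-linux-arm64 -p old-k8s-version-482317 ssh); paths and keys are taken from the log, the exact values are whatever this run wrote:

    # inspect the drop-in edited by the sed commands above
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # crictl endpoint written by the tee command above
    cat /etc/crictl.yaml
    # CRI-O should be active again after the restart
    systemctl is-active crio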
	I1227 10:24:58.280500  489746 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:24:58.280585  489746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:24:58.284715  489746 start.go:574] Will wait 60s for crictl version
	I1227 10:24:58.284791  489746 ssh_runner.go:195] Run: which crictl
	I1227 10:24:58.288406  489746 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:24:58.317551  489746 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:24:58.317655  489746 ssh_runner.go:195] Run: crio --version
	I1227 10:24:58.347904  489746 ssh_runner.go:195] Run: crio --version
	I1227 10:24:58.382907  489746 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1227 10:24:58.385969  489746 cli_runner.go:164] Run: docker network inspect old-k8s-version-482317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:24:58.401653  489746 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:24:58.405400  489746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:24:58.414779  489746 kubeadm.go:884] updating cluster {Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:24:58.414906  489746 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 10:24:58.414958  489746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:24:58.447902  489746 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:24:58.447923  489746 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:24:58.448023  489746 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:24:58.473419  489746 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:24:58.473492  489746 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:24:58.473521  489746 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1227 10:24:58.473666  489746 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-482317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
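Annotation (not part of the log): the [Unit]/[Service] fragment above is the kubelet systemd drop-in minikube renders for this node; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A quick hedged check of what systemd actually loaded, run inside the node:

    # show the effective kubelet unit plus drop-ins, and the rendered drop-in file itself
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf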
	I1227 10:24:58.473790  489746 ssh_runner.go:195] Run: crio config
	I1227 10:24:58.544966  489746 cni.go:84] Creating CNI manager for ""
	I1227 10:24:58.544988  489746 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:24:58.545005  489746 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:24:58.545029  489746 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-482317 NodeName:old-k8s-version-482317 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:24:58.545172  489746 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-482317"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
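Annotation (not part of the log): the YAML above is the full kubeadm configuration minikube generates for v1.28.0 on CRI-O (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). It is copied to /var/tmp/minikube/kubeadm.yaml.new below, and the restart path decides whether reconfiguration is needed by diffing it against the config already on disk, the same check the log performs later:

    # run inside the node; no output means the rendered config matches the existing one
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new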
	
	I1227 10:24:58.545501  489746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1227 10:24:58.556036  489746 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:24:58.556115  489746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:24:58.563603  489746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1227 10:24:58.576444  489746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:24:58.589450  489746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1227 10:24:58.602252  489746 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:24:58.605880  489746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:24:58.615389  489746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:24:58.730028  489746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:24:58.753170  489746 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317 for IP: 192.168.76.2
	I1227 10:24:58.753196  489746 certs.go:195] generating shared ca certs ...
	I1227 10:24:58.753213  489746 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:58.753362  489746 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:24:58.753416  489746 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:24:58.753430  489746 certs.go:257] generating profile certs ...
	I1227 10:24:58.753516  489746 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.key
	I1227 10:24:58.753587  489746 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key.76d9b417
	I1227 10:24:58.753634  489746 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.key
	I1227 10:24:58.753760  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:24:58.753798  489746 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:24:58.753812  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:24:58.753846  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:24:58.753875  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:24:58.753904  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:24:58.753951  489746 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:24:58.754561  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:24:58.782150  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:24:58.802805  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:24:58.820876  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:24:58.843005  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1227 10:24:58.865404  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:24:58.883788  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:24:58.907181  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:24:58.927264  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:24:58.947620  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:24:58.980763  489746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:24:59.001510  489746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:24:59.016673  489746 ssh_runner.go:195] Run: openssl version
	I1227 10:24:59.023173  489746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:24:59.030908  489746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:24:59.038918  489746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:24:59.042734  489746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:24:59.042867  489746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:24:59.085464  489746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:24:59.092720  489746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:24:59.100638  489746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:24:59.107921  489746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:24:59.111672  489746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:24:59.111766  489746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:24:59.154970  489746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:24:59.162392  489746 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:24:59.169690  489746 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:24:59.177198  489746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:24:59.180905  489746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:24:59.181016  489746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:24:59.222029  489746 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
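Annotation (not part of the log): the openssl/ln sequence above installs each CA under /usr/share/ca-certificates and links it from /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e, b5213941, 51391683 in this run). A sketch of reproducing one of those checks inside the node:

    # subject hash of the minikube CA; b5213941 in this run
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # the hash-named symlink should point back at the PEM above
    ls -l "/etc/ssl/certs/${h}.0"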
	I1227 10:24:59.229331  489746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:24:59.232865  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:24:59.273675  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:24:59.314621  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:24:59.355764  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:24:59.403103  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:24:59.460709  489746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
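Annotation (not part of the log): each -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will, which is why this restart skips regenerating the profile certs. Run inside the node against any of the listed certificates, for example:

    # exit 0: still valid for at least another 24h; exit 1: expiring (or already expired)
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "still valid in 24h" || echo "expiring within 24h"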
	I1227 10:24:59.562187  489746 kubeadm.go:401] StartCluster: {Name:old-k8s-version-482317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-482317 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:24:59.562319  489746 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:24:59.562425  489746 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:24:59.636069  489746 cri.go:96] found id: "6af676216486829c31c72726886da4d5b9d2fdd5e03d47e9d092cd74c92823fd"
	I1227 10:24:59.636092  489746 cri.go:96] found id: "5c18700dae648beeb6cbc946e81f00349e9db29024a7fcb4389e4ebb5f3220e3"
	I1227 10:24:59.636097  489746 cri.go:96] found id: "7904f50147b3a49201ac12cc375f895cdfbd6570c8043be40e8f86a6040e4ba7"
	I1227 10:24:59.636101  489746 cri.go:96] found id: "edba935460de1d0d6cf628ac3e09f2ff27ad3160fda618c38a75071e4b54afcc"
	I1227 10:24:59.636114  489746 cri.go:96] found id: ""
	I1227 10:24:59.636167  489746 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:24:59.665752  489746 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:24:59Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:24:59.665826  489746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:24:59.673648  489746 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:24:59.673722  489746 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:24:59.673809  489746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:24:59.681651  489746 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:24:59.682146  489746 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-482317" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:24:59.682306  489746 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-482317" cluster setting kubeconfig missing "old-k8s-version-482317" context setting]
	I1227 10:24:59.682619  489746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:59.683907  489746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:24:59.692586  489746 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 10:24:59.692673  489746 kubeadm.go:602] duration metric: took 18.93102ms to restartPrimaryControlPlane
	I1227 10:24:59.692703  489746 kubeadm.go:403] duration metric: took 130.522101ms to StartCluster
	I1227 10:24:59.692748  489746 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:59.692852  489746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:24:59.693492  489746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:24:59.693981  489746 config.go:182] Loaded profile config "old-k8s-version-482317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1227 10:24:59.693772  489746 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:24:59.694109  489746 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:24:59.694199  489746 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-482317"
	I1227 10:24:59.694233  489746 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-482317"
	W1227 10:24:59.694267  489746 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:24:59.694310  489746 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:24:59.694835  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:59.695489  489746 addons.go:70] Setting dashboard=true in profile "old-k8s-version-482317"
	I1227 10:24:59.695521  489746 addons.go:239] Setting addon dashboard=true in "old-k8s-version-482317"
	W1227 10:24:59.695529  489746 addons.go:248] addon dashboard should already be in state true
	I1227 10:24:59.695554  489746 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:24:59.696050  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:59.699172  489746 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-482317"
	I1227 10:24:59.699206  489746 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-482317"
	I1227 10:24:59.700208  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:59.702583  489746 out.go:179] * Verifying Kubernetes components...
	I1227 10:24:59.705514  489746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:24:59.750725  489746 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:24:59.754203  489746 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:24:59.754226  489746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:24:59.754294  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:59.759576  489746 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:24:59.762911  489746 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:24:59.768567  489746 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-482317"
	W1227 10:24:59.768589  489746 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:24:59.768615  489746 host.go:66] Checking if "old-k8s-version-482317" exists ...
	I1227 10:24:59.769027  489746 cli_runner.go:164] Run: docker container inspect old-k8s-version-482317 --format={{.State.Status}}
	I1227 10:24:59.774529  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:24:59.774606  489746 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:24:59.774690  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:59.805644  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:59.822288  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:24:59.828855  489746 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:24:59.828878  489746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:24:59.828941  489746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-482317
	I1227 10:24:59.865655  489746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/old-k8s-version-482317/id_rsa Username:docker}
	I1227 10:25:00.030065  489746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:25:00.061894  489746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:25:00.101734  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:25:00.101823  489746 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:25:00.105574  489746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:25:00.165106  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:25:00.165131  489746 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:25:00.273467  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:25:00.273493  489746 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:25:00.460512  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:25:00.460602  489746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:25:00.487042  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:25:00.487129  489746 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:25:00.516496  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:25:00.516580  489746 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:25:00.535924  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:25:00.536036  489746 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:25:00.558406  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:25:00.558504  489746 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:25:00.580927  489746 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:25:00.581020  489746 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:25:00.602564  489746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:25:06.399305  489746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.369144126s)
	I1227 10:25:06.399369  489746 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.337393958s)
	I1227 10:25:06.399403  489746 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-482317" to be "Ready" ...
	I1227 10:25:06.399735  489746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.294070793s)
	I1227 10:25:06.430855  489746 node_ready.go:49] node "old-k8s-version-482317" is "Ready"
	I1227 10:25:06.430888  489746 node_ready.go:38] duration metric: took 31.472167ms for node "old-k8s-version-482317" to be "Ready" ...
	I1227 10:25:06.430903  489746 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:25:06.431005  489746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:25:06.909965  489746 api_server.go:72] duration metric: took 7.215875637s to wait for apiserver process to appear ...
	I1227 10:25:06.909994  489746 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:25:06.910015  489746 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:25:06.910347  489746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.307669346s)
	I1227 10:25:06.913300  489746 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-482317 addons enable metrics-server
	
	I1227 10:25:06.916366  489746 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1227 10:25:06.919266  489746 addons.go:530] duration metric: took 7.225154713s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 10:25:06.920895  489746 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:25:06.922834  489746 api_server.go:141] control plane version: v1.28.0
	I1227 10:25:06.922886  489746 api_server.go:131] duration metric: took 12.884174ms to wait for apiserver health ...
	I1227 10:25:06.922896  489746 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:25:06.929036  489746 system_pods.go:59] 8 kube-system pods found
	I1227 10:25:06.929156  489746 system_pods.go:61] "coredns-5dd5756b68-xtcrs" [a1ff47cc-238c-4217-8591-ff8b26b907da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:25:06.929203  489746 system_pods.go:61] "etcd-old-k8s-version-482317" [70dce620-1f12-49f9-8f70-ab1eb4c021eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:25:06.929230  489746 system_pods.go:61] "kindnet-4jvpn" [35d8c991-0977-4f5f-95d3-d06fdf9b1481] Running
	I1227 10:25:06.929258  489746 system_pods.go:61] "kube-apiserver-old-k8s-version-482317" [970f565c-b1c3-40cd-8165-f425b311a9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:25:06.929313  489746 system_pods.go:61] "kube-controller-manager-old-k8s-version-482317" [41aa78cd-9c7b-49f7-bcc1-e85c6d9d606e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:25:06.929349  489746 system_pods.go:61] "kube-proxy-gr6gq" [3a6b528b-199e-43a6-8a9b-f9157d3800a0] Running
	I1227 10:25:06.929378  489746 system_pods.go:61] "kube-scheduler-old-k8s-version-482317" [42afac7c-9449-4b76-b9d1-ef7655e77163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:25:06.929409  489746 system_pods.go:61] "storage-provisioner" [0bd371c6-e3b4-4c0b-8a3a-f17eade42f06] Running
	I1227 10:25:06.929447  489746 system_pods.go:74] duration metric: took 6.542506ms to wait for pod list to return data ...
	I1227 10:25:06.929469  489746 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:25:06.932277  489746 default_sa.go:45] found service account: "default"
	I1227 10:25:06.932300  489746 default_sa.go:55] duration metric: took 2.811885ms for default service account to be created ...
	I1227 10:25:06.932310  489746 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:25:06.936398  489746 system_pods.go:86] 8 kube-system pods found
	I1227 10:25:06.936430  489746 system_pods.go:89] "coredns-5dd5756b68-xtcrs" [a1ff47cc-238c-4217-8591-ff8b26b907da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:25:06.936440  489746 system_pods.go:89] "etcd-old-k8s-version-482317" [70dce620-1f12-49f9-8f70-ab1eb4c021eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:25:06.936446  489746 system_pods.go:89] "kindnet-4jvpn" [35d8c991-0977-4f5f-95d3-d06fdf9b1481] Running
	I1227 10:25:06.936453  489746 system_pods.go:89] "kube-apiserver-old-k8s-version-482317" [970f565c-b1c3-40cd-8165-f425b311a9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:25:06.936462  489746 system_pods.go:89] "kube-controller-manager-old-k8s-version-482317" [41aa78cd-9c7b-49f7-bcc1-e85c6d9d606e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:25:06.936467  489746 system_pods.go:89] "kube-proxy-gr6gq" [3a6b528b-199e-43a6-8a9b-f9157d3800a0] Running
	I1227 10:25:06.936475  489746 system_pods.go:89] "kube-scheduler-old-k8s-version-482317" [42afac7c-9449-4b76-b9d1-ef7655e77163] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:25:06.936480  489746 system_pods.go:89] "storage-provisioner" [0bd371c6-e3b4-4c0b-8a3a-f17eade42f06] Running
	I1227 10:25:06.936487  489746 system_pods.go:126] duration metric: took 4.171923ms to wait for k8s-apps to be running ...
	I1227 10:25:06.936494  489746 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:25:06.936555  489746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:25:06.952874  489746 system_svc.go:56] duration metric: took 16.368787ms WaitForService to wait for kubelet
	I1227 10:25:06.952946  489746 kubeadm.go:587] duration metric: took 7.25885937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:25:06.952980  489746 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:25:06.965037  489746 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:25:06.965110  489746 node_conditions.go:123] node cpu capacity is 2
	I1227 10:25:06.965138  489746 node_conditions.go:105] duration metric: took 12.138714ms to run NodePressure ...
	I1227 10:25:06.965164  489746 start.go:242] waiting for startup goroutines ...
	I1227 10:25:06.965208  489746 start.go:247] waiting for cluster config update ...
	I1227 10:25:06.965234  489746 start.go:256] writing updated cluster config ...
	I1227 10:25:06.965544  489746 ssh_runner.go:195] Run: rm -f paused
	I1227 10:25:06.975942  489746 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:25:06.981600  489746 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xtcrs" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:25:08.992878  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:11.488254  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:13.987561  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:15.989029  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:18.487524  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:20.489471  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:22.987713  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:24.992785  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:27.489662  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:29.988220  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:31.988505  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:33.991428  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:36.487846  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:38.488079  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	W1227 10:25:40.488446  489746 pod_ready.go:104] pod "coredns-5dd5756b68-xtcrs" is not "Ready", error: <nil>
	I1227 10:25:42.488089  489746 pod_ready.go:94] pod "coredns-5dd5756b68-xtcrs" is "Ready"
	I1227 10:25:42.488117  489746 pod_ready.go:86] duration metric: took 35.506448155s for pod "coredns-5dd5756b68-xtcrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.491285  489746 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.496300  489746 pod_ready.go:94] pod "etcd-old-k8s-version-482317" is "Ready"
	I1227 10:25:42.496331  489746 pod_ready.go:86] duration metric: took 5.019324ms for pod "etcd-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.499372  489746 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.504231  489746 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-482317" is "Ready"
	I1227 10:25:42.504313  489746 pod_ready.go:86] duration metric: took 4.912968ms for pod "kube-apiserver-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.507499  489746 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.685945  489746 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-482317" is "Ready"
	I1227 10:25:42.685978  489746 pod_ready.go:86] duration metric: took 178.446671ms for pod "kube-controller-manager-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:42.887361  489746 pod_ready.go:83] waiting for pod "kube-proxy-gr6gq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:43.285835  489746 pod_ready.go:94] pod "kube-proxy-gr6gq" is "Ready"
	I1227 10:25:43.285902  489746 pod_ready.go:86] duration metric: took 398.501406ms for pod "kube-proxy-gr6gq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:43.487193  489746 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:43.885721  489746 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-482317" is "Ready"
	I1227 10:25:43.885753  489746 pod_ready.go:86] duration metric: took 398.527154ms for pod "kube-scheduler-old-k8s-version-482317" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:25:43.885766  489746 pod_ready.go:40] duration metric: took 36.909721979s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:25:43.946172  489746 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1227 10:25:43.950013  489746 out.go:203] 
	W1227 10:25:43.953006  489746 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1227 10:25:43.955914  489746 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:25:43.958737  489746 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-482317" cluster and "default" namespace by default
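Annotation (not part of the log): the warning above is only about client/server skew: the host kubectl is 1.33.2 while the cluster runs 1.28.0, so the log suggests the version-matched wrapper. A hedged follow-up using the binary and profile names from this run:

    # pass-through kubectl pinned to the cluster's Kubernetes version
    out/minikube-linux-arm64 -p old-k8s-version-482317 kubectl -- get pods -A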
	
	
	==> CRI-O <==
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.504411712Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4c837e2d-2dd8-41ce-9658-55d9877e808f name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.505635158Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8a7cdfe5-11e6-47d4-b4bf-6531ca701217 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.50686091Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh/dashboard-metrics-scraper" id=2c6da427-14df-4168-8ea2-1b8d261fc2b2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.506981969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.517869365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.518629717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.540244063Z" level=info msg="Created container 85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh/dashboard-metrics-scraper" id=2c6da427-14df-4168-8ea2-1b8d261fc2b2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.541261673Z" level=info msg="Starting container: 85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1" id=45c8357a-8573-44e0-920b-c45961ee8203 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:25:38 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:38.54299746Z" level=info msg="Started container" PID=1643 containerID=85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh/dashboard-metrics-scraper id=45c8357a-8573-44e0-920b-c45961ee8203 name=/runtime.v1.RuntimeService/StartContainer sandboxID=63e3683ec95a7b8ae6a80b4bd5dcc788703fb643d932c73c2ef512853bd5ff97
	Dec 27 10:25:38 old-k8s-version-482317 conmon[1641]: conmon 85802b12b64fec4b7359 <ninfo>: container 1643 exited with status 1
	Dec 27 10:25:39 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:39.136437828Z" level=info msg="Removing container: 237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741" id=77c387d3-035a-4295-8aac-e75aea39eafc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:25:39 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:39.144716098Z" level=info msg="Error loading conmon cgroup of container 237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741: cgroup deleted" id=77c387d3-035a-4295-8aac-e75aea39eafc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:25:39 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:39.149847021Z" level=info msg="Removed container 237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh/dashboard-metrics-scraper" id=77c387d3-035a-4295-8aac-e75aea39eafc name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.743026544Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.749236157Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.749276379Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.749305392Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.752485092Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.752644339Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.752714559Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.756195414Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.756355883Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.756396064Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.759538324Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:25:45 old-k8s-version-482317 crio[651]: time="2025-12-27T10:25:45.759574049Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	85802b12b64fe       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   63e3683ec95a7       dashboard-metrics-scraper-5f989dc9cf-fzrvh       kubernetes-dashboard
	7b0ce55a826e7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   c61e5b8b09a91       storage-provisioner                              kube-system
	0e925ff8e67ac       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   38a53d9a9816c       kubernetes-dashboard-8694d4445c-jnpvk            kubernetes-dashboard
	0579bd17b999c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   611714da5e406       coredns-5dd5756b68-xtcrs                         kube-system
	bfc8e3ab07b62       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   10dc585babfbd       busybox                                          default
	9b09d87f39c51       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   3fb6051b0d5d1       kube-proxy-gr6gq                                 kube-system
	a6a43cacb933a       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   415013ca60da2       kindnet-4jvpn                                    kube-system
	42968e8e6aa87       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   c61e5b8b09a91       storage-provisioner                              kube-system
	6af6762164868       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   1127ccd3e80e0       etcd-old-k8s-version-482317                      kube-system
	5c18700dae648       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   d42303fe7caf1       kube-apiserver-old-k8s-version-482317            kube-system
	7904f50147b3a       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   1280f8f573eb8       kube-controller-manager-old-k8s-version-482317   kube-system
	edba935460de1       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   827bd65104e88       kube-scheduler-old-k8s-version-482317            kube-system
	
	
	==> coredns [0579bd17b999c40f300161843dca65348d880147d408e942833f0e8a1efa1b67] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48560 - 10077 "HINFO IN 7406498039858455344.8016397607005876646. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015039485s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-482317
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-482317
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=old-k8s-version-482317
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_23_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:23:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-482317
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:25:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:25:35 +0000   Sat, 27 Dec 2025 10:23:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:25:35 +0000   Sat, 27 Dec 2025 10:23:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:25:35 +0000   Sat, 27 Dec 2025 10:23:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:25:35 +0000   Sat, 27 Dec 2025 10:24:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-482317
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                a7aa5659-6104-4ad4-974f-9a450eb0c75f
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-xtcrs                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-482317                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-4jvpn                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-482317             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-482317    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-gr6gq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-482317             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fzrvh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jnpvk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-482317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-482317 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-482317 event: Registered Node old-k8s-version-482317 in Controller
	  Normal  NodeReady                97s                    kubelet          Node old-k8s-version-482317 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 63s)      kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 63s)      kubelet          Node old-k8s-version-482317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 63s)      kubelet          Node old-k8s-version-482317 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-482317 event: Registered Node old-k8s-version-482317 in Controller
	
	
	==> dmesg <==
	[  +3.382865] overlayfs: idmapped layers are currently not supported
	[Dec27 09:53] overlayfs: idmapped layers are currently not supported
	[Dec27 09:57] overlayfs: idmapped layers are currently not supported
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6af676216486829c31c72726886da4d5b9d2fdd5e03d47e9d092cd74c92823fd] <==
	{"level":"info","ts":"2025-12-27T10:25:00.080771Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:25:00.080783Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:25:00.081058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T10:25:00.081139Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-27T10:25:00.081245Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:25:00.081285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-27T10:25:00.102217Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:25:00.102643Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:25:00.102417Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:25:00.104494Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:25:00.117778Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:25:01.875577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:25:01.875711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:25:01.875764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:25:01.875803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:25:01.87584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:25:01.875881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:25:01.875914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:25:01.881892Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-482317 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:25:01.88207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:25:01.883089Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:25:01.887913Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:25:01.888912Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:25:01.894915Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:25:01.895009Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:26:01 up  2:08,  0 user,  load average: 1.50, 1.41, 1.80
	Linux old-k8s-version-482317 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6a43cacb933af66d20a2d7793c31b0b116cbea7d00c6ae9dceb483bf2f0b2bd] <==
	I1227 10:25:05.557697       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:25:05.557928       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:25:05.558058       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:25:05.558070       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:25:05.558079       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:25:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:25:05.742883       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:25:05.742963       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:25:05.742975       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:25:05.743933       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:25:35.743694       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:25:35.743694       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:25:35.743829       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:25:35.744030       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:25:37.143294       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:25:37.143395       1 metrics.go:72] Registering metrics
	I1227 10:25:37.143476       1 controller.go:711] "Syncing nftables rules"
	I1227 10:25:45.742660       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:25:45.742713       1 main.go:301] handling current node
	I1227 10:25:55.743219       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:25:55.743268       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5c18700dae648beeb6cbc946e81f00349e9db29024a7fcb4389e4ebb5f3220e3] <==
	I1227 10:25:04.667029       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1227 10:25:04.862372       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1227 10:25:04.874141       1 aggregator.go:166] initial CRD sync complete...
	I1227 10:25:04.874239       1 autoregister_controller.go:141] Starting autoregister controller
	I1227 10:25:04.874270       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:25:04.874323       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:25:04.882645       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:25:04.945065       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1227 10:25:04.950717       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1227 10:25:04.950800       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1227 10:25:04.951849       1 shared_informer.go:318] Caches are synced for configmaps
	I1227 10:25:04.953481       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1227 10:25:04.953557       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:25:04.972145       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1227 10:25:05.601958       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1227 10:25:06.717026       1 controller.go:624] quota admission added evaluator for: namespaces
	I1227 10:25:06.764398       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1227 10:25:06.790377       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:25:06.801350       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:25:06.817047       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1227 10:25:06.878037       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.217.9"}
	I1227 10:25:06.901293       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.27.29"}
	I1227 10:25:17.170667       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1227 10:25:17.202566       1 controller.go:624] quota admission added evaluator for: endpoints
	I1227 10:25:17.214678       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7904f50147b3a49201ac12cc375f895cdfbd6570c8043be40e8f86a6040e4ba7] <==
	I1227 10:25:17.265651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.022µs"
	I1227 10:25:17.273719       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-jnpvk"
	I1227 10:25:17.285061       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-fzrvh"
	I1227 10:25:17.295120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.196079ms"
	I1227 10:25:17.308222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.576472ms"
	I1227 10:25:17.325175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="29.879791ms"
	I1227 10:25:17.325404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="89.863µs"
	I1227 10:25:17.328196       1 shared_informer.go:318] Caches are synced for disruption
	I1227 10:25:17.339952       1 shared_informer.go:318] Caches are synced for persistent volume
	I1227 10:25:17.349985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="41.612445ms"
	I1227 10:25:17.350073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.432µs"
	I1227 10:25:17.378277       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 10:25:17.417210       1 shared_informer.go:318] Caches are synced for resource quota
	I1227 10:25:17.721830       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:25:17.721948       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1227 10:25:17.748634       1 shared_informer.go:318] Caches are synced for garbage collector
	I1227 10:25:24.110500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.005846ms"
	I1227 10:25:24.110934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.112µs"
	I1227 10:25:28.110692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.233µs"
	I1227 10:25:29.119358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.848µs"
	I1227 10:25:30.120618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.867µs"
	I1227 10:25:39.152246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.952µs"
	I1227 10:25:42.111618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.283182ms"
	I1227 10:25:42.113044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="159.28µs"
	I1227 10:25:48.518893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.882µs"
	
	
	==> kube-proxy [9b09d87f39c5152bb435531967cef400dd6c3797b38f7965e024bc264e021c98] <==
	I1227 10:25:05.639394       1 server_others.go:69] "Using iptables proxy"
	I1227 10:25:05.680746       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1227 10:25:05.982514       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:25:05.985262       1 server_others.go:152] "Using iptables Proxier"
	I1227 10:25:05.985364       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1227 10:25:05.985450       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1227 10:25:05.987210       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1227 10:25:05.987661       1 server.go:846] "Version info" version="v1.28.0"
	I1227 10:25:05.987888       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:25:05.988653       1 config.go:188] "Starting service config controller"
	I1227 10:25:05.988725       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1227 10:25:05.988774       1 config.go:97] "Starting endpoint slice config controller"
	I1227 10:25:05.988801       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1227 10:25:05.989386       1 config.go:315] "Starting node config controller"
	I1227 10:25:05.989432       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1227 10:25:06.091632       1 shared_informer.go:318] Caches are synced for node config
	I1227 10:25:06.091662       1 shared_informer.go:318] Caches are synced for service config
	I1227 10:25:06.091688       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [edba935460de1d0d6cf628ac3e09f2ff27ad3160fda618c38a75071e4b54afcc] <==
	I1227 10:25:01.666678       1 serving.go:348] Generated self-signed cert in-memory
	W1227 10:25:04.836375       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:25:04.836472       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:25:04.836509       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:25:04.836540       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:25:04.899289       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1227 10:25:04.899691       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:25:04.901055       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:25:04.901127       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1227 10:25:04.902269       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1227 10:25:04.902342       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1227 10:25:05.004509       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.301082     780 topology_manager.go:215] "Topology Admit Handler" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-fzrvh"
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.405950     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e785d875-fcdd-4cd8-b425-45a6c5b06cca-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fzrvh\" (UID: \"e785d875-fcdd-4cd8-b425-45a6c5b06cca\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh"
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.406009     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6zpv\" (UniqueName: \"kubernetes.io/projected/e785d875-fcdd-4cd8-b425-45a6c5b06cca-kube-api-access-x6zpv\") pod \"dashboard-metrics-scraper-5f989dc9cf-fzrvh\" (UID: \"e785d875-fcdd-4cd8-b425-45a6c5b06cca\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh"
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.406038     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/15c15981-5af5-4212-be07-05f623f48f13-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jnpvk\" (UID: \"15c15981-5af5-4212-be07-05f623f48f13\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jnpvk"
	Dec 27 10:25:17 old-k8s-version-482317 kubelet[780]: I1227 10:25:17.406089     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57tt4\" (UniqueName: \"kubernetes.io/projected/15c15981-5af5-4212-be07-05f623f48f13-kube-api-access-57tt4\") pod \"kubernetes-dashboard-8694d4445c-jnpvk\" (UID: \"15c15981-5af5-4212-be07-05f623f48f13\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jnpvk"
	Dec 27 10:25:18 old-k8s-version-482317 kubelet[780]: W1227 10:25:18.514723     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/crio-38a53d9a9816c177329ef78112d8c6977bac58ffff920adb70fd7e63f0594b61 WatchSource:0}: Error finding container 38a53d9a9816c177329ef78112d8c6977bac58ffff920adb70fd7e63f0594b61: Status 404 returned error can't find the container with id 38a53d9a9816c177329ef78112d8c6977bac58ffff920adb70fd7e63f0594b61
	Dec 27 10:25:18 old-k8s-version-482317 kubelet[780]: W1227 10:25:18.532686     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d3ed077d2566b4ba36cad6c0411ebe14cd0357e6f9010669344ae3054c5657eb/crio-63e3683ec95a7b8ae6a80b4bd5dcc788703fb643d932c73c2ef512853bd5ff97 WatchSource:0}: Error finding container 63e3683ec95a7b8ae6a80b4bd5dcc788703fb643d932c73c2ef512853bd5ff97: Status 404 returned error can't find the container with id 63e3683ec95a7b8ae6a80b4bd5dcc788703fb643d932c73c2ef512853bd5ff97
	Dec 27 10:25:28 old-k8s-version-482317 kubelet[780]: I1227 10:25:28.093356     780 scope.go:117] "RemoveContainer" containerID="5893c8551354ae850ea22df641c42d4c8685dbac3f6630b58ba7eb4aa5775777"
	Dec 27 10:25:28 old-k8s-version-482317 kubelet[780]: I1227 10:25:28.112310     780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jnpvk" podStartSLOduration=6.590110553 podCreationTimestamp="2025-12-27 10:25:17 +0000 UTC" firstStartedPulling="2025-12-27 10:25:18.518262424 +0000 UTC m=+19.770837667" lastFinishedPulling="2025-12-27 10:25:23.040393511 +0000 UTC m=+24.292968754" observedRunningTime="2025-12-27 10:25:24.100891941 +0000 UTC m=+25.353467183" watchObservedRunningTime="2025-12-27 10:25:28.11224164 +0000 UTC m=+29.364816883"
	Dec 27 10:25:29 old-k8s-version-482317 kubelet[780]: I1227 10:25:29.097849     780 scope.go:117] "RemoveContainer" containerID="5893c8551354ae850ea22df641c42d4c8685dbac3f6630b58ba7eb4aa5775777"
	Dec 27 10:25:29 old-k8s-version-482317 kubelet[780]: I1227 10:25:29.098713     780 scope.go:117] "RemoveContainer" containerID="237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741"
	Dec 27 10:25:29 old-k8s-version-482317 kubelet[780]: E1227 10:25:29.099185     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fzrvh_kubernetes-dashboard(e785d875-fcdd-4cd8-b425-45a6c5b06cca)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca"
	Dec 27 10:25:30 old-k8s-version-482317 kubelet[780]: I1227 10:25:30.104344     780 scope.go:117] "RemoveContainer" containerID="237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741"
	Dec 27 10:25:30 old-k8s-version-482317 kubelet[780]: E1227 10:25:30.104664     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fzrvh_kubernetes-dashboard(e785d875-fcdd-4cd8-b425-45a6c5b06cca)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca"
	Dec 27 10:25:36 old-k8s-version-482317 kubelet[780]: I1227 10:25:36.120813     780 scope.go:117] "RemoveContainer" containerID="42968e8e6aa87735f51cd79fbe984e9063af3193c09105ee955eb81677f295b5"
	Dec 27 10:25:38 old-k8s-version-482317 kubelet[780]: I1227 10:25:38.503247     780 scope.go:117] "RemoveContainer" containerID="237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741"
	Dec 27 10:25:39 old-k8s-version-482317 kubelet[780]: I1227 10:25:39.130744     780 scope.go:117] "RemoveContainer" containerID="237d5014e64c86e7173407771e4f826912e3d3fb0b7b9ea49cc6af3457a21741"
	Dec 27 10:25:39 old-k8s-version-482317 kubelet[780]: I1227 10:25:39.130946     780 scope.go:117] "RemoveContainer" containerID="85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1"
	Dec 27 10:25:39 old-k8s-version-482317 kubelet[780]: E1227 10:25:39.131533     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fzrvh_kubernetes-dashboard(e785d875-fcdd-4cd8-b425-45a6c5b06cca)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca"
	Dec 27 10:25:48 old-k8s-version-482317 kubelet[780]: I1227 10:25:48.504081     780 scope.go:117] "RemoveContainer" containerID="85802b12b64fec4b73591d3bc4e8b9986a52394c02aeaed963717aca57b2e9a1"
	Dec 27 10:25:48 old-k8s-version-482317 kubelet[780]: E1227 10:25:48.504958     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fzrvh_kubernetes-dashboard(e785d875-fcdd-4cd8-b425-45a6c5b06cca)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fzrvh" podUID="e785d875-fcdd-4cd8-b425-45a6c5b06cca"
	Dec 27 10:25:56 old-k8s-version-482317 kubelet[780]: I1227 10:25:56.157137     780 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 27 10:25:56 old-k8s-version-482317 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:25:56 old-k8s-version-482317 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:25:56 old-k8s-version-482317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0e925ff8e67acd4543b394e63d2b4c088abc3bdd579e1132f3c6096feceec216] <==
	2025/12/27 10:25:23 Starting overwatch
	2025/12/27 10:25:23 Using namespace: kubernetes-dashboard
	2025/12/27 10:25:23 Using in-cluster config to connect to apiserver
	2025/12/27 10:25:23 Using secret token for csrf signing
	2025/12/27 10:25:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:25:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:25:23 Successful initial request to the apiserver, version: v1.28.0
	2025/12/27 10:25:23 Generating JWE encryption key
	2025/12/27 10:25:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:25:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:25:23 Initializing JWE encryption key from synchronized object
	2025/12/27 10:25:23 Creating in-cluster Sidecar client
	2025/12/27 10:25:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:25:23 Serving insecurely on HTTP port: 9090
	2025/12/27 10:25:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [42968e8e6aa87735f51cd79fbe984e9063af3193c09105ee955eb81677f295b5] <==
	I1227 10:25:05.576516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:25:35.578833       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7b0ce55a826e7e76318fd3f47cc892955448f52ed3d924c467eb4effc59b9afa] <==
	I1227 10:25:36.169587       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:25:36.183584       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:25:36.183642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1227 10:25:53.583422       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:25:53.583473       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a92e87e6-a9f2-4729-a034-2de7c1eae4b3", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-482317_ddffb137-5faf-42ed-be9f-be8d7290b420 became leader
	I1227 10:25:53.583708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-482317_ddffb137-5faf-42ed-be9f-be8d7290b420!
	I1227 10:25:53.684010       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-482317_ddffb137-5faf-42ed-be9f-be8d7290b420!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-482317 -n old-k8s-version-482317
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-482317 -n old-k8s-version-482317: exit status 2 (404.936379ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-482317 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.72s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (241.426783ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:26:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-784377 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-784377 describe deploy/metrics-server -n kube-system: exit status 1 (82.820025ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-784377 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-784377
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-784377:

-- stdout --
	[
	    {
	        "Id": "e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94",
	        "Created": "2025-12-27T10:26:10.469840578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 494315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:26:10.539819906Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/hostname",
	        "HostsPath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/hosts",
	        "LogPath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94-json.log",
	        "Name": "/default-k8s-diff-port-784377",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-784377:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-784377",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94",
	                "LowerDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823/merged",
	                "UpperDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823/diff",
	                "WorkDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-784377",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-784377/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-784377",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-784377",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-784377",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "696b5fa623781f54457d07ad8d79fd36b9e0ad03c8cd86006e53ed73c2b4405d",
	            "SandboxKey": "/var/run/docker/netns/696b5fa62378",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-784377": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:8f:56:41:22:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8d733bf5719fdacab69d83d9ca4658b4a637aafdad81690293c70d13f01e7f9",
	                    "EndpointID": "c0b9e664be1a893d524e458702536d9c1b3bc49d416c20e770972b46e2a41084",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-784377",
	                        "e19c4a001b93"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
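The Ports map in the inspect output above is the same data the "Last Start" log below reads when it resolves the forwarded SSH port. A minimal sketch of that lookup, assuming the default-k8s-diff-port-784377 container still exists and the docker CLI is on PATH:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-784377
	# prints the host port published for 22/tcp (33418 in the output above)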
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-784377 logs -n 25
E1227 10:26:58.386628  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-784377 logs -n 25: (1.192155531s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-785247 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-785247                │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-785247                │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-785247                │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p cilium-785247 sudo crio config                                                                                                                                                                                                             │ cilium-785247                │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │                     │
	│ delete  │ -p cilium-785247                                                                                                                                                                                                                              │ cilium-785247                │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-528820       │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:17 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-528820       │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ delete  │ -p cert-expiration-528820                                                                                                                                                                                                                     │ cert-expiration-528820       │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ start   │ -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │                     │
	│ delete  │ -p force-systemd-env-193016                                                                                                                                                                                                                   │ force-systemd-env-193016     │ jenkins │ v1.37.0 │ 27 Dec 25 10:22 UTC │ 27 Dec 25 10:22 UTC │
	│ start   │ -p cert-options-810217 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ cert-options-810217 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ -p cert-options-810217 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ delete  │ -p cert-options-810217                                                                                                                                                                                                                        │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │                     │
	│ stop    │ -p old-k8s-version-482317 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-482317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:25 UTC │
	│ image   │ old-k8s-version-482317 image list --format=json                                                                                                                                                                                               │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │ 27 Dec 25 10:25 UTC │
	│ pause   │ -p old-k8s-version-482317 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │                     │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:26:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:26:05.470092  493882 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:26:05.470297  493882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:26:05.470330  493882 out.go:374] Setting ErrFile to fd 2...
	I1227 10:26:05.470353  493882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:26:05.470638  493882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:26:05.471230  493882 out.go:368] Setting JSON to false
	I1227 10:26:05.472216  493882 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7719,"bootTime":1766823447,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:26:05.472336  493882 start.go:143] virtualization:  
	I1227 10:26:05.476760  493882 out.go:179] * [default-k8s-diff-port-784377] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:26:05.480439  493882 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:26:05.480532  493882 notify.go:221] Checking for updates...
	I1227 10:26:05.487322  493882 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:26:05.490655  493882 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:26:05.493965  493882 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:26:05.497175  493882 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:26:05.500355  493882 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:26:05.504133  493882 config.go:182] Loaded profile config "force-systemd-flag-915850": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:26:05.504259  493882 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:26:05.539153  493882 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:26:05.539285  493882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:26:05.596336  493882 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:26:05.586708427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:26:05.596451  493882 docker.go:319] overlay module found
	I1227 10:26:05.599702  493882 out.go:179] * Using the docker driver based on user configuration
	I1227 10:26:05.602731  493882 start.go:309] selected driver: docker
	I1227 10:26:05.602758  493882 start.go:928] validating driver "docker" against <nil>
	I1227 10:26:05.602780  493882 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:26:05.603537  493882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:26:05.658898  493882 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:26:05.649308497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:26:05.659079  493882 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:26:05.659309  493882 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:26:05.662312  493882 out.go:179] * Using Docker driver with root privileges
	I1227 10:26:05.665282  493882 cni.go:84] Creating CNI manager for ""
	I1227 10:26:05.665353  493882 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:26:05.665367  493882 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:26:05.665448  493882 start.go:353] cluster config:
	{Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:26:05.670491  493882 out.go:179] * Starting "default-k8s-diff-port-784377" primary control-plane node in "default-k8s-diff-port-784377" cluster
	I1227 10:26:05.673412  493882 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:26:05.676433  493882 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:26:05.679222  493882 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:26:05.679280  493882 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:26:05.679294  493882 cache.go:65] Caching tarball of preloaded images
	I1227 10:26:05.679292  493882 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:26:05.679374  493882 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:26:05.679384  493882 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:26:05.679493  493882 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/config.json ...
	I1227 10:26:05.679519  493882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/config.json: {Name:mkde177f00be4dc6430e5c9c03c82ce6138cc42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:05.699144  493882 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:26:05.699172  493882 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:26:05.699188  493882 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:26:05.699218  493882 start.go:360] acquireMachinesLock for default-k8s-diff-port-784377: {Name:mkae337831628ba1f53545c8de178f498d429381 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:26:05.699351  493882 start.go:364] duration metric: took 80.814µs to acquireMachinesLock for "default-k8s-diff-port-784377"
	I1227 10:26:05.699381  493882 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:26:05.699449  493882 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:26:05.702863  493882 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:26:05.703103  493882 start.go:159] libmachine.API.Create for "default-k8s-diff-port-784377" (driver="docker")
	I1227 10:26:05.703142  493882 client.go:173] LocalClient.Create starting
	I1227 10:26:05.703236  493882 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:26:05.703276  493882 main.go:144] libmachine: Decoding PEM data...
	I1227 10:26:05.703299  493882 main.go:144] libmachine: Parsing certificate...
	I1227 10:26:05.703358  493882 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:26:05.703381  493882 main.go:144] libmachine: Decoding PEM data...
	I1227 10:26:05.703393  493882 main.go:144] libmachine: Parsing certificate...
	I1227 10:26:05.703755  493882 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-784377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:26:05.735218  493882 cli_runner.go:211] docker network inspect default-k8s-diff-port-784377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:26:05.735302  493882 network_create.go:284] running [docker network inspect default-k8s-diff-port-784377] to gather additional debugging logs...
	I1227 10:26:05.735322  493882 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-784377
	W1227 10:26:05.761006  493882 cli_runner.go:211] docker network inspect default-k8s-diff-port-784377 returned with exit code 1
	I1227 10:26:05.761033  493882 network_create.go:287] error running [docker network inspect default-k8s-diff-port-784377]: docker network inspect default-k8s-diff-port-784377: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-784377 not found
	I1227 10:26:05.761047  493882 network_create.go:289] output of [docker network inspect default-k8s-diff-port-784377]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-784377 not found
	
	** /stderr **
	I1227 10:26:05.761156  493882 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:26:05.780681  493882 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:26:05.781144  493882 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:26:05.781479  493882 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:26:05.781981  493882 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a43070}
	I1227 10:26:05.782005  493882 network_create.go:124] attempt to create docker network default-k8s-diff-port-784377 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:26:05.782068  493882 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-784377 default-k8s-diff-port-784377
	I1227 10:26:05.840661  493882 network_create.go:108] docker network default-k8s-diff-port-784377 192.168.76.0/24 created
	I1227 10:26:05.840705  493882 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-784377" container
	I1227 10:26:05.840811  493882 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:26:05.857860  493882 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-784377 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-784377 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:26:05.877051  493882 oci.go:103] Successfully created a docker volume default-k8s-diff-port-784377
	I1227 10:26:05.877151  493882 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-784377-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-784377 --entrypoint /usr/bin/test -v default-k8s-diff-port-784377:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:26:06.430794  493882 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-784377
	I1227 10:26:06.430862  493882 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:26:06.430876  493882 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:26:06.430944  493882 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-784377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:26:10.401370  493882 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-784377:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.970383158s)
	I1227 10:26:10.401404  493882 kic.go:203] duration metric: took 3.970524213s to extract preloaded images to volume ...
	W1227 10:26:10.401563  493882 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:26:10.401675  493882 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:26:10.454495  493882 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-784377 --name default-k8s-diff-port-784377 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-784377 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-784377 --network default-k8s-diff-port-784377 --ip 192.168.76.2 --volume default-k8s-diff-port-784377:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:26:10.757391  493882 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Running}}
	I1227 10:26:10.779321  493882 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:26:10.807222  493882 cli_runner.go:164] Run: docker exec default-k8s-diff-port-784377 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:26:10.902369  493882 oci.go:144] the created container "default-k8s-diff-port-784377" has a running status.
	I1227 10:26:10.902401  493882 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa...
	I1227 10:26:11.431601  493882 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:26:11.457263  493882 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:26:11.478618  493882 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:26:11.478637  493882 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-784377 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:26:11.539909  493882 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:26:11.578361  493882 machine.go:94] provisionDockerMachine start ...
	I1227 10:26:11.578441  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:11.605583  493882 main.go:144] libmachine: Using SSH client type: native
	I1227 10:26:11.605919  493882 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1227 10:26:11.605928  493882 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:26:11.767702  493882 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-784377
	
	I1227 10:26:11.767774  493882 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-784377"
	I1227 10:26:11.767880  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:11.790181  493882 main.go:144] libmachine: Using SSH client type: native
	I1227 10:26:11.790493  493882 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1227 10:26:11.790505  493882 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-784377 && echo "default-k8s-diff-port-784377" | sudo tee /etc/hostname
	I1227 10:26:11.958552  493882 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-784377
	
	I1227 10:26:11.958719  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:11.979674  493882 main.go:144] libmachine: Using SSH client type: native
	I1227 10:26:11.980014  493882 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1227 10:26:11.980040  493882 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-784377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-784377/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-784377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:26:12.132424  493882 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:26:12.132463  493882 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:26:12.132486  493882 ubuntu.go:190] setting up certificates
	I1227 10:26:12.132500  493882 provision.go:84] configureAuth start
	I1227 10:26:12.132561  493882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-784377
	I1227 10:26:12.155338  493882 provision.go:143] copyHostCerts
	I1227 10:26:12.155418  493882 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:26:12.155432  493882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:26:12.155513  493882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:26:12.155611  493882 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:26:12.155620  493882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:26:12.155647  493882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:26:12.155705  493882 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:26:12.155713  493882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:26:12.155744  493882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:26:12.155798  493882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-784377 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-784377 localhost minikube]
	I1227 10:26:12.459299  493882 provision.go:177] copyRemoteCerts
	I1227 10:26:12.459363  493882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:26:12.459402  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:12.485393  493882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:26:12.587702  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:26:12.604151  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 10:26:12.620945  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:26:12.637951  493882 provision.go:87] duration metric: took 505.424849ms to configureAuth
	I1227 10:26:12.637982  493882 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:26:12.638171  493882 config.go:182] Loaded profile config "default-k8s-diff-port-784377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:26:12.638278  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:12.655662  493882 main.go:144] libmachine: Using SSH client type: native
	I1227 10:26:12.656000  493882 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1227 10:26:12.656020  493882 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:26:12.959628  493882 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:26:12.959654  493882 machine.go:97] duration metric: took 1.381273831s to provisionDockerMachine
	I1227 10:26:12.959678  493882 client.go:176] duration metric: took 7.256524341s to LocalClient.Create
	I1227 10:26:12.959696  493882 start.go:167] duration metric: took 7.256594955s to libmachine.API.Create "default-k8s-diff-port-784377"
	I1227 10:26:12.959704  493882 start.go:293] postStartSetup for "default-k8s-diff-port-784377" (driver="docker")
	I1227 10:26:12.959713  493882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:26:12.959787  493882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:26:12.959843  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:12.977401  493882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:26:13.076244  493882 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:26:13.079729  493882 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:26:13.079761  493882 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:26:13.079774  493882 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:26:13.079829  493882 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:26:13.079910  493882 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:26:13.080044  493882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:26:13.087632  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:26:13.105909  493882 start.go:296] duration metric: took 146.190497ms for postStartSetup
	I1227 10:26:13.106316  493882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-784377
	I1227 10:26:13.123727  493882 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/config.json ...
	I1227 10:26:13.124042  493882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:26:13.124104  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:13.141566  493882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:26:13.241840  493882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:26:13.249808  493882 start.go:128] duration metric: took 7.550343734s to createHost
	I1227 10:26:13.249842  493882 start.go:83] releasing machines lock for "default-k8s-diff-port-784377", held for 7.550475608s
	I1227 10:26:13.249912  493882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-784377
	I1227 10:26:13.270636  493882 ssh_runner.go:195] Run: cat /version.json
	I1227 10:26:13.270705  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:13.270967  493882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:26:13.271041  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:13.289596  493882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:26:13.301617  493882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:26:13.396021  493882 ssh_runner.go:195] Run: systemctl --version
	I1227 10:26:13.484624  493882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:26:13.520988  493882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:26:13.525613  493882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:26:13.525743  493882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:26:13.556072  493882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:26:13.556098  493882 start.go:496] detecting cgroup driver to use...
	I1227 10:26:13.556146  493882 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:26:13.556217  493882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:26:13.574209  493882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:26:13.587188  493882 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:26:13.587298  493882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:26:13.605163  493882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:26:13.624038  493882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:26:13.757058  493882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:26:13.876841  493882 docker.go:234] disabling docker service ...
	I1227 10:26:13.876940  493882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:26:13.898588  493882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:26:13.911267  493882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:26:14.041419  493882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:26:14.166477  493882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:26:14.179509  493882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:26:14.195262  493882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:26:14.195369  493882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:26:14.205022  493882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:26:14.205125  493882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:26:14.213872  493882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:26:14.222879  493882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:26:14.231564  493882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:26:14.239685  493882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:26:14.248612  493882 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:26:14.261931  493882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:26:14.271073  493882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:26:14.279038  493882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:26:14.286648  493882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:26:14.396864  493882 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:26:14.573310  493882 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:26:14.573461  493882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:26:14.577742  493882 start.go:574] Will wait 60s for crictl version
	I1227 10:26:14.577807  493882 ssh_runner.go:195] Run: which crictl
	I1227 10:26:14.581478  493882 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:26:14.605868  493882 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:26:14.605953  493882 ssh_runner.go:195] Run: crio --version
	I1227 10:26:14.633585  493882 ssh_runner.go:195] Run: crio --version
	I1227 10:26:14.666269  493882 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:26:14.669159  493882 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-784377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:26:14.685444  493882 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:26:14.689489  493882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:26:14.699469  493882 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:26:14.699596  493882 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:26:14.699686  493882 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:26:14.735324  493882 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:26:14.735344  493882 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:26:14.735400  493882 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:26:14.772305  493882 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:26:14.772330  493882 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:26:14.772338  493882 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I1227 10:26:14.772439  493882 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-784377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:26:14.772534  493882 ssh_runner.go:195] Run: crio config
	I1227 10:26:14.831251  493882 cni.go:84] Creating CNI manager for ""
	I1227 10:26:14.831271  493882 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:26:14.831287  493882 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:26:14.831310  493882 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-784377 NodeName:default-k8s-diff-port-784377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:26:14.831440  493882 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-784377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:26:14.831511  493882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:26:14.839211  493882 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:26:14.839296  493882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:26:14.846942  493882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 10:26:14.859929  493882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:26:14.872779  493882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
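The rendered kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml just before init. Outside of this run, a file like that could be sanity-checked with the bundled kubeadm binary, assuming it supports the `config validate` subcommand (present in recent kubeadm releases):

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new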
	I1227 10:26:14.885246  493882 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:26:14.888893  493882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
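Together with the earlier host.minikube.internal update, the two bash snippets above should leave the node's /etc/hosts with roughly these minikube-managed entries appended (a sketch; the rest of the file is left untouched):

    192.168.76.1	host.minikube.internal
    192.168.76.2	control-plane.minikube.internal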
	I1227 10:26:14.898888  493882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:26:15.039713  493882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:26:15.058408  493882 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377 for IP: 192.168.76.2
	I1227 10:26:15.058438  493882 certs.go:195] generating shared ca certs ...
	I1227 10:26:15.058495  493882 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:15.058713  493882 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:26:15.059277  493882 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:26:15.059319  493882 certs.go:257] generating profile certs ...
	I1227 10:26:15.059463  493882 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.key
	I1227 10:26:15.059485  493882 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt with IP's: []
	I1227 10:26:15.242186  493882 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt ...
	I1227 10:26:15.242220  493882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: {Name:mk2c09d4933b239ce08924b30e3470cab2a2100a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:15.242449  493882 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.key ...
	I1227 10:26:15.242466  493882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.key: {Name:mk8478f3ad91360a3c9092cac606ebf3ac7ac5a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:15.242564  493882 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key.e1bcd003
	I1227 10:26:15.242581  493882 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.crt.e1bcd003 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:26:15.476377  493882 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.crt.e1bcd003 ...
	I1227 10:26:15.476414  493882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.crt.e1bcd003: {Name:mkd5f2fb14e7dad7e81ea318f4cc8ce490a597cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:15.476583  493882 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key.e1bcd003 ...
	I1227 10:26:15.476602  493882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key.e1bcd003: {Name:mk59463e076e3ec68a63bfb26b87182849da33c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:15.476683  493882 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.crt.e1bcd003 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.crt
	I1227 10:26:15.476770  493882 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key.e1bcd003 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key
	I1227 10:26:15.476834  493882 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.key
	I1227 10:26:15.476852  493882 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.crt with IP's: []
	I1227 10:26:15.811679  493882 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.crt ...
	I1227 10:26:15.811716  493882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.crt: {Name:mk4ab4c69196c2e610f57a912c967f21db22f30b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:15.811895  493882 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.key ...
	I1227 10:26:15.811910  493882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.key: {Name:mk76275fbddad633f59db52299d88587996bea77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:15.812122  493882 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:26:15.812172  493882 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:26:15.812185  493882 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:26:15.812213  493882 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:26:15.812247  493882 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:26:15.812273  493882 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:26:15.812319  493882 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:26:15.812906  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:26:15.832650  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:26:15.852311  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:26:15.872644  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:26:15.891863  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 10:26:15.910797  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:26:15.929253  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:26:15.947753  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 10:26:15.973494  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:26:15.995101  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:26:16.020539  493882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:26:16.039107  493882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:26:16.053026  493882 ssh_runner.go:195] Run: openssl version
	I1227 10:26:16.059615  493882 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:26:16.067659  493882 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:26:16.075538  493882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:26:16.079619  493882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:26:16.079707  493882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:26:16.121036  493882 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:26:16.128766  493882 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/299811.pem /etc/ssl/certs/51391683.0
	I1227 10:26:16.136525  493882 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:26:16.144442  493882 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:26:16.152481  493882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:26:16.156522  493882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:26:16.156683  493882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:26:16.198334  493882 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:26:16.207938  493882 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2998112.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:26:16.217572  493882 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:26:16.227305  493882 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:26:16.238476  493882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:26:16.243613  493882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:26:16.243691  493882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:26:16.287345  493882 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:26:16.301098  493882 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
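The block above follows the standard OpenSSL subject-hash convention: each CA certificate is placed under /usr/share/ca-certificates and a symlink named <hash>.0 is created in /etc/ssl/certs so OpenSSL can find it by subject hash. A minimal sketch of the same steps for a single certificate (variable names are illustrative only):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as in the log above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"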
	I1227 10:26:16.308812  493882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:26:16.312497  493882 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:26:16.312553  493882 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:26:16.312628  493882 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:26:16.312687  493882 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:26:16.341005  493882 cri.go:96] found id: ""
	I1227 10:26:16.341082  493882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:26:16.349410  493882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:26:16.358117  493882 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:26:16.358214  493882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:26:16.366179  493882 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:26:16.366201  493882 kubeadm.go:158] found existing configuration files:
	
	I1227 10:26:16.366256  493882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1227 10:26:16.374610  493882 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:26:16.374686  493882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:26:16.382469  493882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1227 10:26:16.390816  493882 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:26:16.390883  493882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:26:16.398625  493882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1227 10:26:16.406590  493882 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:26:16.406676  493882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:26:16.414279  493882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1227 10:26:16.422415  493882 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:26:16.422529  493882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:26:16.430331  493882 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:26:16.473242  493882 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:26:16.473507  493882 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:26:16.544605  493882 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:26:16.544744  493882 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:26:16.544800  493882 kubeadm.go:319] OS: Linux
	I1227 10:26:16.544874  493882 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:26:16.544952  493882 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:26:16.545027  493882 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:26:16.545106  493882 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:26:16.545183  493882 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:26:16.545263  493882 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:26:16.545338  493882 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:26:16.545416  493882 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:26:16.545488  493882 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:26:16.612839  493882 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:26:16.613040  493882 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:26:16.613192  493882 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:26:16.626375  493882 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:26:16.630645  493882 out.go:252]   - Generating certificates and keys ...
	I1227 10:26:16.630833  493882 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:26:16.630955  493882 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:26:17.114444  493882 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:26:17.433997  493882 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:26:17.550335  493882 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:26:17.699241  493882 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:26:17.819515  493882 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:26:17.820116  493882 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-784377 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:26:17.995520  493882 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:26:17.995785  493882 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-784377 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:26:18.237742  493882 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:26:18.498145  493882 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:26:18.837430  493882 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:26:18.837777  493882 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:26:19.566525  493882 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:26:19.834061  493882 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:26:20.170981  493882 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:26:20.340735  493882 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:26:20.643629  493882 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:26:20.644484  493882 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:26:20.647484  493882 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:26:20.650849  493882 out.go:252]   - Booting up control plane ...
	I1227 10:26:20.650953  493882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:26:20.651042  493882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:26:20.652218  493882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:26:20.668127  493882 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:26:20.668679  493882 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:26:20.677519  493882 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:26:20.678185  493882 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:26:20.678411  493882 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:26:20.836460  493882 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:26:20.836625  493882 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:26:21.833544  493882 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001892681s
	I1227 10:26:21.837406  493882 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:26:21.837531  493882 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1227 10:26:21.837636  493882 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:26:21.837725  493882 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 10:26:23.351363  493882 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.513373612s
	I1227 10:26:24.777326  493882 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.939892115s
	I1227 10:26:26.339147  493882 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501532241s
	I1227 10:26:26.373201  493882 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:26:26.396676  493882 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:26:26.410922  493882 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:26:26.411133  493882 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-784377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:26:26.423961  493882 kubeadm.go:319] [bootstrap-token] Using token: 5mo6y1.uhm4nn9a4rdomi5y
	I1227 10:26:26.426902  493882 out.go:252]   - Configuring RBAC rules ...
	I1227 10:26:26.427028  493882 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:26:26.433099  493882 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:26:26.440855  493882 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:26:26.445132  493882 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:26:26.451231  493882 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:26:26.455453  493882 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:26:26.751378  493882 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:26:27.189272  493882 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:26:27.746088  493882 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:26:27.747422  493882 kubeadm.go:319] 
	I1227 10:26:27.747542  493882 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:26:27.747562  493882 kubeadm.go:319] 
	I1227 10:26:27.747641  493882 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:26:27.747649  493882 kubeadm.go:319] 
	I1227 10:26:27.747674  493882 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:26:27.747738  493882 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:26:27.747791  493882 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:26:27.747800  493882 kubeadm.go:319] 
	I1227 10:26:27.747854  493882 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:26:27.747863  493882 kubeadm.go:319] 
	I1227 10:26:27.747911  493882 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:26:27.747920  493882 kubeadm.go:319] 
	I1227 10:26:27.747995  493882 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:26:27.748073  493882 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:26:27.748145  493882 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:26:27.748154  493882 kubeadm.go:319] 
	I1227 10:26:27.748245  493882 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:26:27.748325  493882 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:26:27.748333  493882 kubeadm.go:319] 
	I1227 10:26:27.748417  493882 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 5mo6y1.uhm4nn9a4rdomi5y \
	I1227 10:26:27.748524  493882 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 \
	I1227 10:26:27.748548  493882 kubeadm.go:319] 	--control-plane 
	I1227 10:26:27.748558  493882 kubeadm.go:319] 
	I1227 10:26:27.748643  493882 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:26:27.748652  493882 kubeadm.go:319] 
	I1227 10:26:27.748734  493882 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 5mo6y1.uhm4nn9a4rdomi5y \
	I1227 10:26:27.748840  493882 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 
	I1227 10:26:27.753754  493882 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:26:27.754209  493882 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:26:27.754354  493882 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:26:27.754371  493882 cni.go:84] Creating CNI manager for ""
	I1227 10:26:27.754379  493882 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:26:27.759418  493882 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:26:27.762362  493882 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:26:27.766567  493882 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 10:26:27.766589  493882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:26:27.781131  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 10:26:28.076393  493882 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:26:28.076546  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:28.076624  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-784377 minikube.k8s.io/updated_at=2025_12_27T10_26_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=default-k8s-diff-port-784377 minikube.k8s.io/primary=true
	I1227 10:26:28.237848  493882 ops.go:34] apiserver oom_adj: -16
	I1227 10:26:28.237971  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:28.738912  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:29.238587  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:29.738858  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:30.238080  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:30.738487  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:31.238078  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:31.738043  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:32.238342  493882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:26:32.344271  493882 kubeadm.go:1114] duration metric: took 4.267776232s to wait for elevateKubeSystemPrivileges
	I1227 10:26:32.344310  493882 kubeadm.go:403] duration metric: took 16.031761124s to StartCluster
	I1227 10:26:32.344329  493882 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:32.344407  493882 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:26:32.345007  493882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:26:32.345236  493882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:26:32.345245  493882 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:26:32.345478  493882 config.go:182] Loaded profile config "default-k8s-diff-port-784377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:26:32.345511  493882 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:26:32.345569  493882 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-784377"
	I1227 10:26:32.345587  493882 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-784377"
	I1227 10:26:32.345607  493882 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:26:32.346054  493882 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:26:32.346230  493882 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-784377"
	I1227 10:26:32.346265  493882 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-784377"
	I1227 10:26:32.346581  493882 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:26:32.350874  493882 out.go:179] * Verifying Kubernetes components...
	I1227 10:26:32.353781  493882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:26:32.399823  493882 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-784377"
	I1227 10:26:32.399908  493882 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:26:32.400388  493882 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:26:32.402933  493882 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:26:32.407099  493882 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:26:32.407128  493882 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:26:32.407203  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:32.437838  493882 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:26:32.437906  493882 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:26:32.438004  493882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:26:32.449585  493882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:26:32.477713  493882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:26:32.636548  493882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
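The one-liner above rewrites the coredns ConfigMap in place. Assembled from the two sed expressions, the injected change amounts to adding a `log` directive before `errors` and inserting roughly this stanza ahead of the `forward` plugin in the Corefile (sketch, not the captured ConfigMap):

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }

The "host record injected into CoreDNS's ConfigMap" message a few lines below confirms the replace succeeded.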
	I1227 10:26:32.687532  493882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:26:32.788876  493882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:26:32.851952  493882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:26:33.355048  493882 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 10:26:33.356818  493882 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-784377" to be "Ready" ...
	I1227 10:26:33.769960  493882 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:26:33.772926  493882 addons.go:530] duration metric: took 1.42739568s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 10:26:33.861198  493882 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-784377" context rescaled to 1 replicas
	W1227 10:26:35.360374  493882 node_ready.go:57] node "default-k8s-diff-port-784377" has "Ready":"False" status (will retry)
	W1227 10:26:37.860471  493882 node_ready.go:57] node "default-k8s-diff-port-784377" has "Ready":"False" status (will retry)
	W1227 10:26:40.361012  493882 node_ready.go:57] node "default-k8s-diff-port-784377" has "Ready":"False" status (will retry)
	W1227 10:26:42.860336  493882 node_ready.go:57] node "default-k8s-diff-port-784377" has "Ready":"False" status (will retry)
	W1227 10:26:45.362936  493882 node_ready.go:57] node "default-k8s-diff-port-784377" has "Ready":"False" status (will retry)
	I1227 10:26:45.860312  493882 node_ready.go:49] node "default-k8s-diff-port-784377" is "Ready"
	I1227 10:26:45.860339  493882 node_ready.go:38] duration metric: took 12.503483816s for node "default-k8s-diff-port-784377" to be "Ready" ...
	I1227 10:26:45.860353  493882 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:26:45.860412  493882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:26:45.872909  493882 api_server.go:72] duration metric: took 13.527632744s to wait for apiserver process to appear ...
	I1227 10:26:45.872938  493882 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:26:45.872962  493882 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1227 10:26:45.880962  493882 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1227 10:26:45.882055  493882 api_server.go:141] control plane version: v1.35.0
	I1227 10:26:45.882086  493882 api_server.go:131] duration metric: took 9.139382ms to wait for apiserver health ...
	I1227 10:26:45.882095  493882 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:26:45.885316  493882 system_pods.go:59] 8 kube-system pods found
	I1227 10:26:45.885353  493882 system_pods.go:61] "coredns-7d764666f9-kzx9l" [76a78735-c0bd-4e61-96b8-27aa62f2d606] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:26:45.885361  493882 system_pods.go:61] "etcd-default-k8s-diff-port-784377" [3c119831-097f-402d-84ac-5174f6e07ad1] Running
	I1227 10:26:45.885367  493882 system_pods.go:61] "kindnet-sf4gn" [a46b9960-4c5b-4044-91fe-c24fb6ada404] Running
	I1227 10:26:45.885373  493882 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-784377" [3dc5cc47-c595-4435-9951-fa7812ebb41a] Running
	I1227 10:26:45.885378  493882 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-784377" [888c0c83-4e0a-449d-ad85-f6b8da83749f] Running
	I1227 10:26:45.885383  493882 system_pods.go:61] "kube-proxy-qczcb" [c8664f73-e55c-41d1-b3d6-d8c69735ea44] Running
	I1227 10:26:45.885390  493882 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-784377" [4bb42d3b-c5d7-40b0-b4d1-8a81f0d2a721] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:26:45.885401  493882 system_pods.go:61] "storage-provisioner" [58d25c76-fba6-4b47-b0f2-3505d7df97db] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:26:45.885414  493882 system_pods.go:74] duration metric: took 3.312239ms to wait for pod list to return data ...
	I1227 10:26:45.885428  493882 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:26:45.888136  493882 default_sa.go:45] found service account: "default"
	I1227 10:26:45.888162  493882 default_sa.go:55] duration metric: took 2.728003ms for default service account to be created ...
	I1227 10:26:45.888172  493882 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:26:45.891171  493882 system_pods.go:86] 8 kube-system pods found
	I1227 10:26:45.891212  493882 system_pods.go:89] "coredns-7d764666f9-kzx9l" [76a78735-c0bd-4e61-96b8-27aa62f2d606] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:26:45.891219  493882 system_pods.go:89] "etcd-default-k8s-diff-port-784377" [3c119831-097f-402d-84ac-5174f6e07ad1] Running
	I1227 10:26:45.891226  493882 system_pods.go:89] "kindnet-sf4gn" [a46b9960-4c5b-4044-91fe-c24fb6ada404] Running
	I1227 10:26:45.891231  493882 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-784377" [3dc5cc47-c595-4435-9951-fa7812ebb41a] Running
	I1227 10:26:45.891236  493882 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-784377" [888c0c83-4e0a-449d-ad85-f6b8da83749f] Running
	I1227 10:26:45.891240  493882 system_pods.go:89] "kube-proxy-qczcb" [c8664f73-e55c-41d1-b3d6-d8c69735ea44] Running
	I1227 10:26:45.891247  493882 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-784377" [4bb42d3b-c5d7-40b0-b4d1-8a81f0d2a721] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:26:45.891262  493882 system_pods.go:89] "storage-provisioner" [58d25c76-fba6-4b47-b0f2-3505d7df97db] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:26:45.891291  493882 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 10:26:46.145299  493882 system_pods.go:86] 8 kube-system pods found
	I1227 10:26:46.145340  493882 system_pods.go:89] "coredns-7d764666f9-kzx9l" [76a78735-c0bd-4e61-96b8-27aa62f2d606] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:26:46.145348  493882 system_pods.go:89] "etcd-default-k8s-diff-port-784377" [3c119831-097f-402d-84ac-5174f6e07ad1] Running
	I1227 10:26:46.145355  493882 system_pods.go:89] "kindnet-sf4gn" [a46b9960-4c5b-4044-91fe-c24fb6ada404] Running
	I1227 10:26:46.145360  493882 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-784377" [3dc5cc47-c595-4435-9951-fa7812ebb41a] Running
	I1227 10:26:46.145365  493882 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-784377" [888c0c83-4e0a-449d-ad85-f6b8da83749f] Running
	I1227 10:26:46.145370  493882 system_pods.go:89] "kube-proxy-qczcb" [c8664f73-e55c-41d1-b3d6-d8c69735ea44] Running
	I1227 10:26:46.145389  493882 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-784377" [4bb42d3b-c5d7-40b0-b4d1-8a81f0d2a721] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:26:46.145401  493882 system_pods.go:89] "storage-provisioner" [58d25c76-fba6-4b47-b0f2-3505d7df97db] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:26:46.493614  493882 system_pods.go:86] 8 kube-system pods found
	I1227 10:26:46.493652  493882 system_pods.go:89] "coredns-7d764666f9-kzx9l" [76a78735-c0bd-4e61-96b8-27aa62f2d606] Running
	I1227 10:26:46.493661  493882 system_pods.go:89] "etcd-default-k8s-diff-port-784377" [3c119831-097f-402d-84ac-5174f6e07ad1] Running
	I1227 10:26:46.493666  493882 system_pods.go:89] "kindnet-sf4gn" [a46b9960-4c5b-4044-91fe-c24fb6ada404] Running
	I1227 10:26:46.493671  493882 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-784377" [3dc5cc47-c595-4435-9951-fa7812ebb41a] Running
	I1227 10:26:46.493676  493882 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-784377" [888c0c83-4e0a-449d-ad85-f6b8da83749f] Running
	I1227 10:26:46.493681  493882 system_pods.go:89] "kube-proxy-qczcb" [c8664f73-e55c-41d1-b3d6-d8c69735ea44] Running
	I1227 10:26:46.493689  493882 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-784377" [4bb42d3b-c5d7-40b0-b4d1-8a81f0d2a721] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:26:46.493701  493882 system_pods.go:89] "storage-provisioner" [58d25c76-fba6-4b47-b0f2-3505d7df97db] Running
	I1227 10:26:46.493709  493882 system_pods.go:126] duration metric: took 605.532376ms to wait for k8s-apps to be running ...
	I1227 10:26:46.493721  493882 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:26:46.493777  493882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:26:46.506714  493882 system_svc.go:56] duration metric: took 12.982186ms WaitForService to wait for kubelet
	I1227 10:26:46.506790  493882 kubeadm.go:587] duration metric: took 14.161516798s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:26:46.506823  493882 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:26:46.509746  493882 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:26:46.509782  493882 node_conditions.go:123] node cpu capacity is 2
	I1227 10:26:46.509797  493882 node_conditions.go:105] duration metric: took 2.968276ms to run NodePressure ...
	I1227 10:26:46.509811  493882 start.go:242] waiting for startup goroutines ...
	I1227 10:26:46.509818  493882 start.go:247] waiting for cluster config update ...
	I1227 10:26:46.509831  493882 start.go:256] writing updated cluster config ...
	I1227 10:26:46.510137  493882 ssh_runner.go:195] Run: rm -f paused
	I1227 10:26:46.514063  493882 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:26:46.517742  493882 pod_ready.go:83] waiting for pod "coredns-7d764666f9-kzx9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:46.523217  493882 pod_ready.go:94] pod "coredns-7d764666f9-kzx9l" is "Ready"
	I1227 10:26:46.523250  493882 pod_ready.go:86] duration metric: took 5.477569ms for pod "coredns-7d764666f9-kzx9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:46.525832  493882 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:46.531016  493882 pod_ready.go:94] pod "etcd-default-k8s-diff-port-784377" is "Ready"
	I1227 10:26:46.531049  493882 pod_ready.go:86] duration metric: took 5.18987ms for pod "etcd-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:46.533557  493882 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:46.538368  493882 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-784377" is "Ready"
	I1227 10:26:46.538398  493882 pod_ready.go:86] duration metric: took 4.81325ms for pod "kube-apiserver-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:46.540849  493882 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:46.917727  493882 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-784377" is "Ready"
	I1227 10:26:46.917795  493882 pod_ready.go:86] duration metric: took 376.916694ms for pod "kube-controller-manager-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:47.118063  493882 pod_ready.go:83] waiting for pod "kube-proxy-qczcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:47.517645  493882 pod_ready.go:94] pod "kube-proxy-qczcb" is "Ready"
	I1227 10:26:47.517675  493882 pod_ready.go:86] duration metric: took 399.570354ms for pod "kube-proxy-qczcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:47.719809  493882 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:48.117950  493882 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-784377" is "Ready"
	I1227 10:26:48.117985  493882 pod_ready.go:86] duration metric: took 398.109169ms for pod "kube-scheduler-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:26:48.117999  493882 pod_ready.go:40] duration metric: took 1.603900974s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:26:48.174986  493882 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:26:48.178239  493882 out.go:203] 
	W1227 10:26:48.181197  493882 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:26:48.184131  493882 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:26:48.187843  493882 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-784377" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 10:26:46 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:46.030690925Z" level=info msg="Created container 70b95b6856d60ff6e93e956058c80fa504e0ae3287f3169f11a0a4201aced873: kube-system/coredns-7d764666f9-kzx9l/coredns" id=3373d708-4fec-485b-a2aa-178ba6c57370 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:26:46 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:46.032136888Z" level=info msg="Starting container: 70b95b6856d60ff6e93e956058c80fa504e0ae3287f3169f11a0a4201aced873" id=5ef2c000-ded2-4679-a77e-c0e193d7ee12 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:26:46 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:46.034987305Z" level=info msg="Started container" PID=1763 containerID=70b95b6856d60ff6e93e956058c80fa504e0ae3287f3169f11a0a4201aced873 description=kube-system/coredns-7d764666f9-kzx9l/coredns id=5ef2c000-ded2-4679-a77e-c0e193d7ee12 name=/runtime.v1.RuntimeService/StartContainer sandboxID=11b2804e041c93ee3890ea45e63d087cc831a1ae51be41357fad4a1d511562b0
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.708378454Z" level=info msg="Running pod sandbox: default/busybox/POD" id=83bdb677-4757-44aa-96c0-8ee282c37d45 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.708464445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.713721039Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1a3b46021d742f5cf8fd1a14e01374f29211e9c5e7333c499a442fb503043607 UID:64a81aa2-3d2b-45b0-8ec3-053991c36e9f NetNS:/var/run/netns/c13e8db3-b3a3-4006-a033-dd1c80a02733 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002058920}] Aliases:map[]}"
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.713894399Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.724822624Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1a3b46021d742f5cf8fd1a14e01374f29211e9c5e7333c499a442fb503043607 UID:64a81aa2-3d2b-45b0-8ec3-053991c36e9f NetNS:/var/run/netns/c13e8db3-b3a3-4006-a033-dd1c80a02733 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002058920}] Aliases:map[]}"
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.72514284Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.727881133Z" level=info msg="Ran pod sandbox 1a3b46021d742f5cf8fd1a14e01374f29211e9c5e7333c499a442fb503043607 with infra container: default/busybox/POD" id=83bdb677-4757-44aa-96c0-8ee282c37d45 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.731373951Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=34f94d1d-bd79-4f33-b108-71db90f98486 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.731518494Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=34f94d1d-bd79-4f33-b108-71db90f98486 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.731560571Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=34f94d1d-bd79-4f33-b108-71db90f98486 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.732877778Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=78a89802-0603-4cc2-99e0-8a5eb7592116 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:26:48 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:48.735950038Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.724036229Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=78a89802-0603-4cc2-99e0-8a5eb7592116 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.725910865Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=87935e80-b1bd-41ea-850b-ec8ae581f169 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.729900657Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=00addd3f-4fc6-4405-9424-807b37bd1519 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.737102346Z" level=info msg="Creating container: default/busybox/busybox" id=e24501ad-ccd1-45e9-a493-3450934ea066 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.737261388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.741988492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.742664306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.775724806Z" level=info msg="Created container 408edaf9efd21b201b4962667fb19212fd302ec0880564df499a39b502927786: default/busybox/busybox" id=e24501ad-ccd1-45e9-a493-3450934ea066 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.780466417Z" level=info msg="Starting container: 408edaf9efd21b201b4962667fb19212fd302ec0880564df499a39b502927786" id=caeb5a0b-0ae2-4bc3-b652-caaf103aa522 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:26:50 default-k8s-diff-port-784377 crio[836]: time="2025-12-27T10:26:50.782367277Z" level=info msg="Started container" PID=1816 containerID=408edaf9efd21b201b4962667fb19212fd302ec0880564df499a39b502927786 description=default/busybox/busybox id=caeb5a0b-0ae2-4bc3-b652-caaf103aa522 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a3b46021d742f5cf8fd1a14e01374f29211e9c5e7333c499a442fb503043607
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	408edaf9efd21       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   1a3b46021d742       busybox                                                default
	70b95b6856d60       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      12 seconds ago      Running             coredns                   0                   11b2804e041c9       coredns-7d764666f9-kzx9l                               kube-system
	43de6e5a80701       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   999c7a3b9737a       storage-provisioner                                    kube-system
	7abfd72cbc368       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   14462c1bf052b       kindnet-sf4gn                                          kube-system
	059edf6075c09       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      25 seconds ago      Running             kube-proxy                0                   2b59454bfdf94       kube-proxy-qczcb                                       kube-system
	bbf7c0a5fe2c8       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      36 seconds ago      Running             etcd                      0                   b07d4309d5c61       etcd-default-k8s-diff-port-784377                      kube-system
	0b63e827d330d       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      36 seconds ago      Running             kube-controller-manager   0                   7ab56f769c4e9       kube-controller-manager-default-k8s-diff-port-784377   kube-system
	5ca443fd58274       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      36 seconds ago      Running             kube-scheduler            0                   86abbe8c00a1e       kube-scheduler-default-k8s-diff-port-784377            kube-system
	a145ce8f15e9f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      36 seconds ago      Running             kube-apiserver            0                   8fe8c3c67b2ee       kube-apiserver-default-k8s-diff-port-784377            kube-system
	
	
	==> coredns [70b95b6856d60ff6e93e956058c80fa504e0ae3287f3169f11a0a4201aced873] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34966 - 324 "HINFO IN 1972951222939001175.2151351458541657280. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012867854s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-784377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-784377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=default-k8s-diff-port-784377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_26_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:26:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-784377
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:26:57 +0000   Sat, 27 Dec 2025 10:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:26:57 +0000   Sat, 27 Dec 2025 10:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:26:57 +0000   Sat, 27 Dec 2025 10:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:26:57 +0000   Sat, 27 Dec 2025 10:26:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-784377
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                0c39d998-d532-41c6-a784-b1225108f230
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-kzx9l                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-default-k8s-diff-port-784377                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-sf4gn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-784377             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-784377    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-qczcb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-784377             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node default-k8s-diff-port-784377 event: Registered Node default-k8s-diff-port-784377 in Controller
	
	
	==> dmesg <==
	[Dec27 09:53] overlayfs: idmapped layers are currently not supported
	[Dec27 09:57] overlayfs: idmapped layers are currently not supported
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bbf7c0a5fe2c8e850fcc9cb76d29982086ce1e0d4a1f11a19b67abd77c5e863e] <==
	{"level":"info","ts":"2025-12-27T10:26:22.064346Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:26:22.336008Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T10:26:22.336131Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T10:26:22.336201Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T10:26:22.336279Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:26:22.336323Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:26:22.340032Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:26:22.340125Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:26:22.340199Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T10:26:22.340232Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:26:22.344257Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:26:22.344439Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-784377 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:26:22.344529Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:26:22.349579Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:26:22.349768Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:26:22.349837Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:26:22.349917Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T10:26:22.350041Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T10:26:22.350089Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:26:22.363275Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:26:22.363385Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:26:22.369294Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:26:22.370293Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:26:22.386447Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:26:22.387202Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 10:26:59 up  2:09,  0 user,  load average: 2.53, 1.75, 1.89
	Linux default-k8s-diff-port-784377 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7abfd72cbc3683ebb8c6380c8abb7747994e0e618ea10c01879c3ec358c16753] <==
	I1227 10:26:35.121625       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:26:35.122367       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:26:35.122547       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:26:35.122569       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:26:35.122581       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:26:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:26:35.417687       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:26:35.417802       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:26:35.417837       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:26:35.418580       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:26:35.618230       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:26:35.618371       1 metrics.go:72] Registering metrics
	I1227 10:26:35.618509       1 controller.go:711] "Syncing nftables rules"
	I1227 10:26:45.417715       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:26:45.417781       1 main.go:301] handling current node
	I1227 10:26:55.420050       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:26:55.420086       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a145ce8f15e9f383956622ac2a11038397fd757542bbc89bcc54e3b4294ffb78] <==
	I1227 10:26:24.775314       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:26:24.789146       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:26:24.794661       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:26:24.795160       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:24.804667       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:26:24.813414       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:26:24.818507       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:26:24.838200       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:26:25.470154       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 10:26:25.475234       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 10:26:25.475256       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:26:26.206197       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:26:26.259747       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:26:26.386752       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 10:26:26.403471       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 10:26:26.405139       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:26:26.412416       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:26:26.720553       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:26:27.161898       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:26:27.188014       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 10:26:27.206580       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 10:26:32.271889       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:26:32.279675       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:26:32.644290       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:26:32.734272       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0b63e827d330dbbb69e9c35161f1ed0e2dc6b5ae4ac81a482c3c99c5b4570c0a] <==
	I1227 10:26:31.528129       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.530202       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.530250       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.530468       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.530585       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.530612       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.530640       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.530696       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:26:31.530764       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-784377"
	I1227 10:26:31.530823       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 10:26:31.531786       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.532896       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.533086       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.533435       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.533486       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.533507       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.533963       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.537735       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:26:31.541921       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.550359       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-784377" podCIDRs=["10.244.0.0/24"]
	I1227 10:26:31.632008       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:31.632036       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:26:31.632055       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:26:31.637967       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:46.533946       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [059edf6075c0928c21e27c14157eef20bd2506e29fa6a6da6487a358307f7d2c] <==
	I1227 10:26:33.350399       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:26:33.621799       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:26:33.723549       1 shared_informer.go:377] "Caches are synced"
	I1227 10:26:33.723599       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:26:33.723693       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:26:33.866636       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:26:33.868040       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:26:33.885130       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:26:33.885470       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:26:33.885492       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:26:33.886694       1 config.go:200] "Starting service config controller"
	I1227 10:26:33.886705       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:26:33.886740       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:26:33.886746       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:26:33.886757       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:26:33.886762       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:26:33.887402       1 config.go:309] "Starting node config controller"
	I1227 10:26:33.887410       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:26:33.887419       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:26:33.987404       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:26:33.987403       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:26:33.987467       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5ca443fd582741b3c2bcd591b29a454474d5bee81e58d8ea44710374c81f21a7] <==
	E1227 10:26:24.791606       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:26:24.791731       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:26:24.791827       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:26:24.793173       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:26:24.793314       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:26:24.793692       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:26:24.793749       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:26:24.793797       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:26:24.793849       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:26:24.793892       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:26:24.793939       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:26:24.793979       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:26:24.794019       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:26:24.794101       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:26:24.794150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:26:24.794197       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:26:25.634997       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:26:25.635875       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:26:25.704376       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:26:25.775302       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:26:25.876342       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:26:25.886263       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:26:25.890164       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:26:25.937105       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I1227 10:26:27.657855       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:26:32 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:32.896279    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a46b9960-4c5b-4044-91fe-c24fb6ada404-lib-modules\") pod \"kindnet-sf4gn\" (UID: \"a46b9960-4c5b-4044-91fe-c24fb6ada404\") " pod="kube-system/kindnet-sf4gn"
	Dec 27 10:26:32 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:32.896298    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvfk\" (UniqueName: \"kubernetes.io/projected/a46b9960-4c5b-4044-91fe-c24fb6ada404-kube-api-access-pgvfk\") pod \"kindnet-sf4gn\" (UID: \"a46b9960-4c5b-4044-91fe-c24fb6ada404\") " pod="kube-system/kindnet-sf4gn"
	Dec 27 10:26:32 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:32.896315    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8664f73-e55c-41d1-b3d6-d8c69735ea44-lib-modules\") pod \"kube-proxy-qczcb\" (UID: \"c8664f73-e55c-41d1-b3d6-d8c69735ea44\") " pod="kube-system/kube-proxy-qczcb"
	Dec 27 10:26:32 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:32.896377    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a46b9960-4c5b-4044-91fe-c24fb6ada404-xtables-lock\") pod \"kindnet-sf4gn\" (UID: \"a46b9960-4c5b-4044-91fe-c24fb6ada404\") " pod="kube-system/kindnet-sf4gn"
	Dec 27 10:26:32 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:32.896395    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c8664f73-e55c-41d1-b3d6-d8c69735ea44-kube-proxy\") pod \"kube-proxy-qczcb\" (UID: \"c8664f73-e55c-41d1-b3d6-d8c69735ea44\") " pod="kube-system/kube-proxy-qczcb"
	Dec 27 10:26:33 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:33.039817    1292 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:26:33 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:33.336434    1292 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-qczcb" podStartSLOduration=1.336418417 podStartE2EDuration="1.336418417s" podCreationTimestamp="2025-12-27 10:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:26:33.336330162 +0000 UTC m=+6.312525185" watchObservedRunningTime="2025-12-27 10:26:33.336418417 +0000 UTC m=+6.312613448"
	Dec 27 10:26:37 default-k8s-diff-port-784377 kubelet[1292]: E1227 10:26:37.056458    1292 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-784377" containerName="kube-scheduler"
	Dec 27 10:26:37 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:37.070387    1292 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-sf4gn" podStartSLOduration=3.217953319 podStartE2EDuration="5.070371021s" podCreationTimestamp="2025-12-27 10:26:32 +0000 UTC" firstStartedPulling="2025-12-27 10:26:33.167584819 +0000 UTC m=+6.143779841" lastFinishedPulling="2025-12-27 10:26:35.02000252 +0000 UTC m=+7.996197543" observedRunningTime="2025-12-27 10:26:35.323403527 +0000 UTC m=+8.299598558" watchObservedRunningTime="2025-12-27 10:26:37.070371021 +0000 UTC m=+10.046566052"
	Dec 27 10:26:39 default-k8s-diff-port-784377 kubelet[1292]: E1227 10:26:39.245362    1292 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-784377" containerName="kube-apiserver"
	Dec 27 10:26:40 default-k8s-diff-port-784377 kubelet[1292]: E1227 10:26:40.045169    1292 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-784377" containerName="kube-controller-manager"
	Dec 27 10:26:41 default-k8s-diff-port-784377 kubelet[1292]: E1227 10:26:41.791628    1292 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-784377" containerName="etcd"
	Dec 27 10:26:45 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:45.568089    1292 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 10:26:45 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:45.702894    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58d25c76-fba6-4b47-b0f2-3505d7df97db-tmp\") pod \"storage-provisioner\" (UID: \"58d25c76-fba6-4b47-b0f2-3505d7df97db\") " pod="kube-system/storage-provisioner"
	Dec 27 10:26:45 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:45.702949    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a78735-c0bd-4e61-96b8-27aa62f2d606-config-volume\") pod \"coredns-7d764666f9-kzx9l\" (UID: \"76a78735-c0bd-4e61-96b8-27aa62f2d606\") " pod="kube-system/coredns-7d764666f9-kzx9l"
	Dec 27 10:26:45 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:45.702974    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cldx\" (UniqueName: \"kubernetes.io/projected/76a78735-c0bd-4e61-96b8-27aa62f2d606-kube-api-access-7cldx\") pod \"coredns-7d764666f9-kzx9l\" (UID: \"76a78735-c0bd-4e61-96b8-27aa62f2d606\") " pod="kube-system/coredns-7d764666f9-kzx9l"
	Dec 27 10:26:45 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:45.703001    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbhkq\" (UniqueName: \"kubernetes.io/projected/58d25c76-fba6-4b47-b0f2-3505d7df97db-kube-api-access-bbhkq\") pod \"storage-provisioner\" (UID: \"58d25c76-fba6-4b47-b0f2-3505d7df97db\") " pod="kube-system/storage-provisioner"
	Dec 27 10:26:45 default-k8s-diff-port-784377 kubelet[1292]: W1227 10:26:45.974536    1292 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/crio-11b2804e041c93ee3890ea45e63d087cc831a1ae51be41357fad4a1d511562b0 WatchSource:0}: Error finding container 11b2804e041c93ee3890ea45e63d087cc831a1ae51be41357fad4a1d511562b0: Status 404 returned error can't find the container with id 11b2804e041c93ee3890ea45e63d087cc831a1ae51be41357fad4a1d511562b0
	Dec 27 10:26:46 default-k8s-diff-port-784377 kubelet[1292]: E1227 10:26:46.336980    1292 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-kzx9l" containerName="coredns"
	Dec 27 10:26:46 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:46.367878    1292 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-kzx9l" podStartSLOduration=14.367859641999999 podStartE2EDuration="14.367859642s" podCreationTimestamp="2025-12-27 10:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:26:46.353496232 +0000 UTC m=+19.329691271" watchObservedRunningTime="2025-12-27 10:26:46.367859642 +0000 UTC m=+19.344054665"
	Dec 27 10:26:46 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:46.389313    1292 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.389295443 podStartE2EDuration="13.389295443s" podCreationTimestamp="2025-12-27 10:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:26:46.369219532 +0000 UTC m=+19.345414563" watchObservedRunningTime="2025-12-27 10:26:46.389295443 +0000 UTC m=+19.365490466"
	Dec 27 10:26:47 default-k8s-diff-port-784377 kubelet[1292]: E1227 10:26:47.064868    1292 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-784377" containerName="kube-scheduler"
	Dec 27 10:26:47 default-k8s-diff-port-784377 kubelet[1292]: E1227 10:26:47.341047    1292 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-kzx9l" containerName="coredns"
	Dec 27 10:26:48 default-k8s-diff-port-784377 kubelet[1292]: E1227 10:26:48.343535    1292 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-kzx9l" containerName="coredns"
	Dec 27 10:26:48 default-k8s-diff-port-784377 kubelet[1292]: I1227 10:26:48.426038    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbnl6\" (UniqueName: \"kubernetes.io/projected/64a81aa2-3d2b-45b0-8ec3-053991c36e9f-kube-api-access-qbnl6\") pod \"busybox\" (UID: \"64a81aa2-3d2b-45b0-8ec3-053991c36e9f\") " pod="default/busybox"
	
	
	==> storage-provisioner [43de6e5a807018cc5b2c038953c963b10fd5e421fb8cadc38b4e10263427a029] <==
	I1227 10:26:46.006781       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:26:46.020941       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:26:46.021609       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:26:46.026572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:46.039919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:26:46.048176       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:26:46.048407       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-784377_f42214b8-b768-464f-81da-a8ac08ca2a3d!
	I1227 10:26:46.049383       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d0a5a360-e8dc-4c99-8635-c67876792b94", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-784377_f42214b8-b768-464f-81da-a8ac08ca2a3d became leader
	W1227 10:26:46.058555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:46.066249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:26:46.149396       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-784377_f42214b8-b768-464f-81da-a8ac08ca2a3d!
	W1227 10:26:48.069973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:48.075188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:50.079007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:50.083963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:52.088001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:52.093393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:54.096631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:54.101074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:56.105040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:56.109567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:58.113850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:26:58.119727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
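The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above come from its leader election, which still locks on the legacy kube-system Endpoints object k8s.io-minikube-hostpath rather than a coordination.k8s.io Lease; they are noise here, not the cause of the failure. As a point of reference only, a minimal sketch of Lease-based leader election with client-go is shown below; the package and function names are illustrative and this is not the provisioner's actual code.

	// Sketch only: Lease-based leader election with client-go, which avoids the
	// deprecated Endpoints lock that produces the warnings in the log above.
	package leaderdemo

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	// runWithLease blocks, running `run` only while this identity holds the lease.
	func runWithLease(ctx context.Context, cfg *rest.Config, id string, run func(context.Context)) error {
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: run,        // start provisioning once the lease is held
				OnStoppedLeading: func() {},  // lease lost; stop work
			},
		})
		return nil
	}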
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-784377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-784377 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-784377 --alsologtostderr -v=1: exit status 80 (2.454713889s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-784377 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:28:11.366532  500396 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:28:11.366657  500396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:28:11.366669  500396 out.go:374] Setting ErrFile to fd 2...
	I1227 10:28:11.366676  500396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:28:11.367103  500396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:28:11.367437  500396 out.go:368] Setting JSON to false
	I1227 10:28:11.367462  500396 mustload.go:66] Loading cluster: default-k8s-diff-port-784377
	I1227 10:28:11.368187  500396 config.go:182] Loaded profile config "default-k8s-diff-port-784377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:28:11.368916  500396 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:28:11.386865  500396 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:28:11.387208  500396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:28:11.463792  500396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 10:28:11.45267435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:28:11.464553  500396 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-784377 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarni
ng:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:28:11.471493  500396 out.go:179] * Pausing node default-k8s-diff-port-784377 ... 
	I1227 10:28:11.475118  500396 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:28:11.475492  500396 ssh_runner.go:195] Run: systemctl --version
	I1227 10:28:11.475558  500396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:28:11.493360  500396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:28:11.595059  500396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:28:11.608592  500396 pause.go:52] kubelet running: true
	I1227 10:28:11.608666  500396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:28:11.859421  500396 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:28:11.859513  500396 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:28:11.927253  500396 cri.go:96] found id: "ad46cb813a6e5f720bcfeede245df35f45ad054aad108d8e71b257b2fda7fff6"
	I1227 10:28:11.927280  500396 cri.go:96] found id: "257cb8d68427e28263c7b7cfc7c556f6a7f666cc3864d1e63a3754d34f6811c0"
	I1227 10:28:11.927298  500396 cri.go:96] found id: "db115809dfa3abd66df9cffc3c51771033570d02c2b6cd1553eaa64df166aa8f"
	I1227 10:28:11.927303  500396 cri.go:96] found id: "26cd0898742aefbe3c2ea283eb8b4ba807fc319c4c99f42beca15d9b71897019"
	I1227 10:28:11.927306  500396 cri.go:96] found id: "884d1d9a3f6b6d4e5ce1a17c115ad0f1b2b6ab7bce7608cb12ecf6a2e5c23c23"
	I1227 10:28:11.927309  500396 cri.go:96] found id: "09c17b1ddb55eb299dcb7712ef525f6d66206db384cf3265ba341b6fbab81ddb"
	I1227 10:28:11.927312  500396 cri.go:96] found id: "341d59906e656a82c9766c6d3a223e1725be3195ff00ad2041b4404437e3f112"
	I1227 10:28:11.927317  500396 cri.go:96] found id: "b7d4b7ac920dca9cef9cb7f9bfbabba055bba950b9a98343d443ec7b05ee967a"
	I1227 10:28:11.927320  500396 cri.go:96] found id: "ae0f1af189b62b675aaca897e11c40c3b47839880ed544ff4c379f37a3b95b8d"
	I1227 10:28:11.927327  500396 cri.go:96] found id: "c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793"
	I1227 10:28:11.927331  500396 cri.go:96] found id: "ab4c87363417a0b5b98320affadd839b09afd291a3764dcc69becacd5d94a9de"
	I1227 10:28:11.927334  500396 cri.go:96] found id: ""
	I1227 10:28:11.927421  500396 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:28:11.938716  500396 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:28:11Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:28:12.194214  500396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:28:12.207340  500396 pause.go:52] kubelet running: false
	I1227 10:28:12.207416  500396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:28:12.392638  500396 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:28:12.392728  500396 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:28:12.468587  500396 cri.go:96] found id: "ad46cb813a6e5f720bcfeede245df35f45ad054aad108d8e71b257b2fda7fff6"
	I1227 10:28:12.468612  500396 cri.go:96] found id: "257cb8d68427e28263c7b7cfc7c556f6a7f666cc3864d1e63a3754d34f6811c0"
	I1227 10:28:12.468622  500396 cri.go:96] found id: "db115809dfa3abd66df9cffc3c51771033570d02c2b6cd1553eaa64df166aa8f"
	I1227 10:28:12.468627  500396 cri.go:96] found id: "26cd0898742aefbe3c2ea283eb8b4ba807fc319c4c99f42beca15d9b71897019"
	I1227 10:28:12.468630  500396 cri.go:96] found id: "884d1d9a3f6b6d4e5ce1a17c115ad0f1b2b6ab7bce7608cb12ecf6a2e5c23c23"
	I1227 10:28:12.468634  500396 cri.go:96] found id: "09c17b1ddb55eb299dcb7712ef525f6d66206db384cf3265ba341b6fbab81ddb"
	I1227 10:28:12.468638  500396 cri.go:96] found id: "341d59906e656a82c9766c6d3a223e1725be3195ff00ad2041b4404437e3f112"
	I1227 10:28:12.468641  500396 cri.go:96] found id: "b7d4b7ac920dca9cef9cb7f9bfbabba055bba950b9a98343d443ec7b05ee967a"
	I1227 10:28:12.468644  500396 cri.go:96] found id: "ae0f1af189b62b675aaca897e11c40c3b47839880ed544ff4c379f37a3b95b8d"
	I1227 10:28:12.468650  500396 cri.go:96] found id: "c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793"
	I1227 10:28:12.468653  500396 cri.go:96] found id: "ab4c87363417a0b5b98320affadd839b09afd291a3764dcc69becacd5d94a9de"
	I1227 10:28:12.468656  500396 cri.go:96] found id: ""
	I1227 10:28:12.468708  500396 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:28:12.713181  500396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:28:12.725978  500396 pause.go:52] kubelet running: false
	I1227 10:28:12.726044  500396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:28:12.890628  500396 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:28:12.890761  500396 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:28:12.956560  500396 cri.go:96] found id: "ad46cb813a6e5f720bcfeede245df35f45ad054aad108d8e71b257b2fda7fff6"
	I1227 10:28:12.956583  500396 cri.go:96] found id: "257cb8d68427e28263c7b7cfc7c556f6a7f666cc3864d1e63a3754d34f6811c0"
	I1227 10:28:12.956589  500396 cri.go:96] found id: "db115809dfa3abd66df9cffc3c51771033570d02c2b6cd1553eaa64df166aa8f"
	I1227 10:28:12.956593  500396 cri.go:96] found id: "26cd0898742aefbe3c2ea283eb8b4ba807fc319c4c99f42beca15d9b71897019"
	I1227 10:28:12.956602  500396 cri.go:96] found id: "884d1d9a3f6b6d4e5ce1a17c115ad0f1b2b6ab7bce7608cb12ecf6a2e5c23c23"
	I1227 10:28:12.956606  500396 cri.go:96] found id: "09c17b1ddb55eb299dcb7712ef525f6d66206db384cf3265ba341b6fbab81ddb"
	I1227 10:28:12.956609  500396 cri.go:96] found id: "341d59906e656a82c9766c6d3a223e1725be3195ff00ad2041b4404437e3f112"
	I1227 10:28:12.956612  500396 cri.go:96] found id: "b7d4b7ac920dca9cef9cb7f9bfbabba055bba950b9a98343d443ec7b05ee967a"
	I1227 10:28:12.956615  500396 cri.go:96] found id: "ae0f1af189b62b675aaca897e11c40c3b47839880ed544ff4c379f37a3b95b8d"
	I1227 10:28:12.956621  500396 cri.go:96] found id: "c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793"
	I1227 10:28:12.956625  500396 cri.go:96] found id: "ab4c87363417a0b5b98320affadd839b09afd291a3764dcc69becacd5d94a9de"
	I1227 10:28:12.956628  500396 cri.go:96] found id: ""
	I1227 10:28:12.956676  500396 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:28:13.465648  500396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:28:13.490142  500396 pause.go:52] kubelet running: false
	I1227 10:28:13.490251  500396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:28:13.666928  500396 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:28:13.667023  500396 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:28:13.739280  500396 cri.go:96] found id: "ad46cb813a6e5f720bcfeede245df35f45ad054aad108d8e71b257b2fda7fff6"
	I1227 10:28:13.739360  500396 cri.go:96] found id: "257cb8d68427e28263c7b7cfc7c556f6a7f666cc3864d1e63a3754d34f6811c0"
	I1227 10:28:13.739373  500396 cri.go:96] found id: "db115809dfa3abd66df9cffc3c51771033570d02c2b6cd1553eaa64df166aa8f"
	I1227 10:28:13.739377  500396 cri.go:96] found id: "26cd0898742aefbe3c2ea283eb8b4ba807fc319c4c99f42beca15d9b71897019"
	I1227 10:28:13.739381  500396 cri.go:96] found id: "884d1d9a3f6b6d4e5ce1a17c115ad0f1b2b6ab7bce7608cb12ecf6a2e5c23c23"
	I1227 10:28:13.739384  500396 cri.go:96] found id: "09c17b1ddb55eb299dcb7712ef525f6d66206db384cf3265ba341b6fbab81ddb"
	I1227 10:28:13.739387  500396 cri.go:96] found id: "341d59906e656a82c9766c6d3a223e1725be3195ff00ad2041b4404437e3f112"
	I1227 10:28:13.739390  500396 cri.go:96] found id: "b7d4b7ac920dca9cef9cb7f9bfbabba055bba950b9a98343d443ec7b05ee967a"
	I1227 10:28:13.739393  500396 cri.go:96] found id: "ae0f1af189b62b675aaca897e11c40c3b47839880ed544ff4c379f37a3b95b8d"
	I1227 10:28:13.739400  500396 cri.go:96] found id: "c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793"
	I1227 10:28:13.739402  500396 cri.go:96] found id: "ab4c87363417a0b5b98320affadd839b09afd291a3764dcc69becacd5d94a9de"
	I1227 10:28:13.739405  500396 cri.go:96] found id: ""
	I1227 10:28:13.739528  500396 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:28:13.755100  500396 out.go:203] 
	W1227 10:28:13.758337  500396 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:28:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:28:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:28:13.758364  500396 out.go:285] * 
	* 
	W1227 10:28:13.760959  500396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:28:13.764229  500396 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-784377 --alsologtostderr -v=1 failed: exit status 80
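The failure mode is visible in the stderr above: after disabling the kubelet, the pause path lists running CRI containers via crictl and then shells out to `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" because the default runc state directory is absent on this crio node; the probe is retried (the log shows a 300ms backoff) and eventually surfaces as the GUEST_PAUSE error. Below is a minimal Go sketch that reproduces just that probe; the command and the 300ms delay are taken from the log, while the retry count and helper name are assumptions for illustration, not minikube's implementation.

	// Sketch: re-run the container-listing probe the pause path fails on.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// probeRuncList retries `sudo runc list -f json` a few times, mirroring the
	// retry seen in the pause log, and returns the last error if all attempts fail.
	func probeRuncList(attempts int) ([]byte, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				return out, nil
			}
			// On this node the error is "open /run/runc: no such file or directory",
			// i.e. runc's default state dir does not exist under the crio runtime.
			lastErr = fmt.Errorf("%v: %s", err, out)
			time.Sleep(300 * time.Millisecond)
		}
		return nil, lastErr
	}

	func main() {
		if out, err := probeRuncList(3); err != nil {
			fmt.Println("runc list failed:", err)
		} else {
			fmt.Println(string(out))
		}
	}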
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-784377
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-784377:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94",
	        "Created": "2025-12-27T10:26:10.469840578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 497843,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:27:12.71478662Z",
	            "FinishedAt": "2025-12-27T10:27:11.928017342Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/hostname",
	        "HostsPath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/hosts",
	        "LogPath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94-json.log",
	        "Name": "/default-k8s-diff-port-784377",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-784377:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-784377",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94",
	                "LowerDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823/merged",
	                "UpperDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823/diff",
	                "WorkDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-784377",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-784377/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-784377",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-784377",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-784377",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08e264ea8c33cff4bc498ce2fbe2bc30a4b1856dc1de78fcde3c8c15cb3be7a1",
	            "SandboxKey": "/var/run/docker/netns/08e264ea8c33",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-784377": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:82:b8:08:55:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8d733bf5719fdacab69d83d9ca4658b4a637aafdad81690293c70d13f01e7f9",
	                    "EndpointID": "f880b87a28672bf5ab9826b19253cccba4f2225f44d3ebb16f5ab5a961313a87",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-784377",
	                        "e19c4a001b93"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
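The docker inspect output above also shows how the tooling reaches the node: every guest port is published on 127.0.0.1 with a dynamically assigned host port (22/tcp maps to 33423 in this run), and the pause log earlier extracts that port with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. The following is a small sketch that performs the same extraction from `docker inspect` JSON; the container name is taken from this run and the helper name is illustrative.

	// Sketch: recover the host port mapped to the guest's 22/tcp from
	// `docker inspect`, mirroring the Go template used in the pause log above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type containerInspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func sshHostPort(name string) (string, error) {
		out, err := exec.Command("docker", "inspect", name).Output()
		if err != nil {
			return "", err
		}
		var items []containerInspect
		if err := json.Unmarshal(out, &items); err != nil {
			return "", err
		}
		if len(items) == 0 || len(items[0].NetworkSettings.Ports["22/tcp"]) == 0 {
			return "", fmt.Errorf("no 22/tcp binding for %s", name)
		}
		return items[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
	}

	func main() {
		port, err := sshHostPort("default-k8s-diff-port-784377")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port) // 33423 in this run
	}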
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377: exit status 2 (354.441365ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-784377 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-784377 logs -n 25: (1.275792897s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-528820       │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:17 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-528820       │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ delete  │ -p cert-expiration-528820                                                                                                                                                                                                                     │ cert-expiration-528820       │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ start   │ -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │                     │
	│ delete  │ -p force-systemd-env-193016                                                                                                                                                                                                                   │ force-systemd-env-193016     │ jenkins │ v1.37.0 │ 27 Dec 25 10:22 UTC │ 27 Dec 25 10:22 UTC │
	│ start   │ -p cert-options-810217 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ cert-options-810217 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ -p cert-options-810217 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ delete  │ -p cert-options-810217                                                                                                                                                                                                                        │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │                     │
	│ stop    │ -p old-k8s-version-482317 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-482317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:25 UTC │
	│ image   │ old-k8s-version-482317 image list --format=json                                                                                                                                                                                               │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │ 27 Dec 25 10:25 UTC │
	│ pause   │ -p old-k8s-version-482317 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │                     │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-784377 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-784377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ image   │ default-k8s-diff-port-784377 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ pause   │ -p default-k8s-diff-port-784377 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:27:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:27:12.433195  497714 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:27:12.433388  497714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:27:12.433418  497714 out.go:374] Setting ErrFile to fd 2...
	I1227 10:27:12.433440  497714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:27:12.433711  497714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:27:12.434148  497714 out.go:368] Setting JSON to false
	I1227 10:27:12.435051  497714 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7786,"bootTime":1766823447,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:27:12.435152  497714 start.go:143] virtualization:  
	I1227 10:27:12.439265  497714 out.go:179] * [default-k8s-diff-port-784377] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:27:12.442544  497714 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:27:12.442617  497714 notify.go:221] Checking for updates...
	I1227 10:27:12.448656  497714 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:27:12.451589  497714 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:27:12.456340  497714 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:27:12.459255  497714 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:27:12.463104  497714 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:27:12.466525  497714 config.go:182] Loaded profile config "default-k8s-diff-port-784377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:27:12.467208  497714 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:27:12.507044  497714 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:27:12.507194  497714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:27:12.567276  497714 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:27:12.557964052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:27:12.567380  497714 docker.go:319] overlay module found
	I1227 10:27:12.570566  497714 out.go:179] * Using the docker driver based on existing profile
	I1227 10:27:12.573449  497714 start.go:309] selected driver: docker
	I1227 10:27:12.573468  497714 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:27:12.573587  497714 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:27:12.574271  497714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:27:12.627060  497714 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:27:12.617570699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:27:12.627398  497714 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:27:12.627433  497714 cni.go:84] Creating CNI manager for ""
	I1227 10:27:12.627491  497714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:27:12.627537  497714 start.go:353] cluster config:
	{Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:27:12.630661  497714 out.go:179] * Starting "default-k8s-diff-port-784377" primary control-plane node in "default-k8s-diff-port-784377" cluster
	I1227 10:27:12.633474  497714 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:27:12.636449  497714 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:27:12.639284  497714 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:27:12.639380  497714 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:27:12.639324  497714 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:27:12.639411  497714 cache.go:65] Caching tarball of preloaded images
	I1227 10:27:12.639537  497714 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:27:12.639547  497714 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:27:12.639660  497714 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/config.json ...
	I1227 10:27:12.658926  497714 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:27:12.658945  497714 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:27:12.658966  497714 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:27:12.658998  497714 start.go:360] acquireMachinesLock for default-k8s-diff-port-784377: {Name:mkae337831628ba1f53545c8de178f498d429381 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:27:12.659059  497714 start.go:364] duration metric: took 44.957µs to acquireMachinesLock for "default-k8s-diff-port-784377"
	I1227 10:27:12.659081  497714 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:27:12.659086  497714 fix.go:54] fixHost starting: 
	I1227 10:27:12.659346  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:12.675867  497714 fix.go:112] recreateIfNeeded on default-k8s-diff-port-784377: state=Stopped err=<nil>
	W1227 10:27:12.675897  497714 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:27:12.679229  497714 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-784377" ...
	I1227 10:27:12.679331  497714 cli_runner.go:164] Run: docker start default-k8s-diff-port-784377
	I1227 10:27:12.936023  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:12.958294  497714 kic.go:430] container "default-k8s-diff-port-784377" state is running.
	I1227 10:27:12.959019  497714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-784377
	I1227 10:27:12.983011  497714 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/config.json ...
	I1227 10:27:12.983237  497714 machine.go:94] provisionDockerMachine start ...
	I1227 10:27:12.983308  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:13.007269  497714 main.go:144] libmachine: Using SSH client type: native
	I1227 10:27:13.007622  497714 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 10:27:13.007632  497714 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:27:13.008842  497714 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:27:16.148406  497714 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-784377
	
	I1227 10:27:16.148431  497714 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-784377"
	I1227 10:27:16.148503  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:16.167605  497714 main.go:144] libmachine: Using SSH client type: native
	I1227 10:27:16.167925  497714 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 10:27:16.167945  497714 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-784377 && echo "default-k8s-diff-port-784377" | sudo tee /etc/hostname
	I1227 10:27:16.322139  497714 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-784377
	
	I1227 10:27:16.322289  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:16.340214  497714 main.go:144] libmachine: Using SSH client type: native
	I1227 10:27:16.340546  497714 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 10:27:16.340571  497714 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-784377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-784377/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-784377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:27:16.484433  497714 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:27:16.484466  497714 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:27:16.484497  497714 ubuntu.go:190] setting up certificates
	I1227 10:27:16.484507  497714 provision.go:84] configureAuth start
	I1227 10:27:16.484571  497714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-784377
	I1227 10:27:16.501907  497714 provision.go:143] copyHostCerts
	I1227 10:27:16.501978  497714 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:27:16.501998  497714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:27:16.502085  497714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:27:16.502198  497714 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:27:16.502210  497714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:27:16.502240  497714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:27:16.502310  497714 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:27:16.502319  497714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:27:16.502345  497714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:27:16.502433  497714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-784377 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-784377 localhost minikube]
	I1227 10:27:16.738693  497714 provision.go:177] copyRemoteCerts
	I1227 10:27:16.738767  497714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:27:16.738824  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:16.755754  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:16.855886  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:27:16.873298  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 10:27:16.890999  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:27:16.909030  497714 provision.go:87] duration metric: took 424.498692ms to configureAuth
	I1227 10:27:16.909059  497714 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:27:16.909260  497714 config.go:182] Loaded profile config "default-k8s-diff-port-784377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:27:16.909366  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:16.926902  497714 main.go:144] libmachine: Using SSH client type: native
	I1227 10:27:16.927229  497714 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 10:27:16.927245  497714 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:27:17.299230  497714 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:27:17.299272  497714 machine.go:97] duration metric: took 4.316004457s to provisionDockerMachine
	I1227 10:27:17.299284  497714 start.go:293] postStartSetup for "default-k8s-diff-port-784377" (driver="docker")
	I1227 10:27:17.299295  497714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:27:17.299356  497714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:27:17.299398  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:17.319229  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:17.420095  497714 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:27:17.423425  497714 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:27:17.423455  497714 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:27:17.423486  497714 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:27:17.423548  497714 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:27:17.423674  497714 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:27:17.423780  497714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:27:17.431052  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:27:17.448519  497714 start.go:296] duration metric: took 149.219464ms for postStartSetup
	I1227 10:27:17.448623  497714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:27:17.448676  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:17.465456  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:17.561042  497714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:27:17.565652  497714 fix.go:56] duration metric: took 4.906559263s for fixHost
	I1227 10:27:17.565682  497714 start.go:83] releasing machines lock for "default-k8s-diff-port-784377", held for 4.90661249s
	I1227 10:27:17.565750  497714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-784377
	I1227 10:27:17.582866  497714 ssh_runner.go:195] Run: cat /version.json
	I1227 10:27:17.582934  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:17.583203  497714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:27:17.583268  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:17.602686  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:17.604828  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:17.699785  497714 ssh_runner.go:195] Run: systemctl --version
	I1227 10:27:17.796528  497714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:27:17.831310  497714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:27:17.835644  497714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:27:17.835776  497714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:27:17.843669  497714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:27:17.843696  497714 start.go:496] detecting cgroup driver to use...
	I1227 10:27:17.843747  497714 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:27:17.843801  497714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:27:17.859622  497714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:27:17.873045  497714 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:27:17.873126  497714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:27:17.888334  497714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:27:17.901671  497714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:27:18.012837  497714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:27:18.135702  497714 docker.go:234] disabling docker service ...
	I1227 10:27:18.135823  497714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:27:18.151349  497714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:27:18.164676  497714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:27:18.274725  497714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:27:18.394497  497714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:27:18.407215  497714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:27:18.421706  497714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:27:18.421832  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.430900  497714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:27:18.431013  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.439868  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.449603  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.459743  497714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:27:18.472140  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.481636  497714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.499518  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.510371  497714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:27:18.518501  497714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:27:18.529412  497714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:27:18.646930  497714 ssh_runner.go:195] Run: sudo systemctl restart crio
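Taken together, the sed commands above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) should leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings once crio is restarted. This is a sketch reconstructed only from the commands shown; TOML table headers and the rest of the stock file are omitted:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
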
	I1227 10:27:18.838881  497714 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:27:18.838975  497714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:27:18.842953  497714 start.go:574] Will wait 60s for crictl version
	I1227 10:27:18.843051  497714 ssh_runner.go:195] Run: which crictl
	I1227 10:27:18.846837  497714 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:27:18.872152  497714 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:27:18.872238  497714 ssh_runner.go:195] Run: crio --version
	I1227 10:27:18.903046  497714 ssh_runner.go:195] Run: crio --version
	I1227 10:27:18.935812  497714 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:27:18.938734  497714 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-784377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:27:18.955095  497714 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:27:18.959147  497714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:27:18.968949  497714 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:27:18.969070  497714 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:27:18.969132  497714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:27:19.006596  497714 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:27:19.006624  497714 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:27:19.006696  497714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:27:19.032509  497714 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:27:19.032533  497714 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:27:19.032542  497714 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I1227 10:27:19.032642  497714 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-784377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:27:19.032729  497714 ssh_runner.go:195] Run: crio config
	I1227 10:27:19.091645  497714 cni.go:84] Creating CNI manager for ""
	I1227 10:27:19.091679  497714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:27:19.091703  497714 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:27:19.091760  497714 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-784377 NodeName:default-k8s-diff-port-784377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:27:19.091921  497714 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-784377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:27:19.092065  497714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:27:19.099796  497714 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:27:19.099908  497714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:27:19.107401  497714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 10:27:19.120606  497714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:27:19.133806  497714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1227 10:27:19.146997  497714 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:27:19.150955  497714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
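With the host.minikube.internal rewrite earlier and this control-plane.minikube.internal rewrite, the guest's /etc/hosts should end up carrying roughly these minikube-managed entries (a sketch; the image's stock entries such as localhost are left as-is):

	127.0.1.1	default-k8s-diff-port-784377
	192.168.76.1	host.minikube.internal
	192.168.76.2	control-plane.minikube.internal
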
	I1227 10:27:19.160987  497714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:27:19.312025  497714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:27:19.332704  497714 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377 for IP: 192.168.76.2
	I1227 10:27:19.332729  497714 certs.go:195] generating shared ca certs ...
	I1227 10:27:19.332764  497714 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:27:19.332903  497714 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:27:19.332954  497714 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:27:19.332965  497714 certs.go:257] generating profile certs ...
	I1227 10:27:19.333058  497714 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.key
	I1227 10:27:19.333134  497714 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key.e1bcd003
	I1227 10:27:19.333177  497714 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.key
	I1227 10:27:19.333297  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:27:19.333338  497714 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:27:19.333353  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:27:19.333387  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:27:19.333412  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:27:19.333442  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:27:19.333494  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:27:19.334063  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:27:19.355687  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:27:19.377103  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:27:19.397478  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:27:19.426271  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 10:27:19.458603  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:27:19.486297  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:27:19.508784  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 10:27:19.536070  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:27:19.559409  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:27:19.582919  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:27:19.605519  497714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:27:19.619266  497714 ssh_runner.go:195] Run: openssl version
	I1227 10:27:19.625734  497714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:27:19.634084  497714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:27:19.642389  497714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:27:19.646260  497714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:27:19.646378  497714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:27:19.689687  497714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:27:19.697934  497714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:27:19.705327  497714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:27:19.714714  497714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:27:19.718694  497714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:27:19.718763  497714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:27:19.760454  497714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:27:19.768108  497714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:27:19.775553  497714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:27:19.783298  497714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:27:19.787111  497714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:27:19.787177  497714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:27:19.829113  497714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:27:19.837027  497714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:27:19.841040  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:27:19.884430  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:27:19.926492  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:27:19.969445  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:27:20.036516  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:27:20.082941  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 10:27:20.143782  497714 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:27:20.143883  497714 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:27:20.144015  497714 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:27:20.208793  497714 cri.go:96] found id: "341d59906e656a82c9766c6d3a223e1725be3195ff00ad2041b4404437e3f112"
	I1227 10:27:20.208818  497714 cri.go:96] found id: "b7d4b7ac920dca9cef9cb7f9bfbabba055bba950b9a98343d443ec7b05ee967a"
	I1227 10:27:20.208833  497714 cri.go:96] found id: "ae0f1af189b62b675aaca897e11c40c3b47839880ed544ff4c379f37a3b95b8d"
	I1227 10:27:20.208837  497714 cri.go:96] found id: ""
	I1227 10:27:20.208918  497714 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:27:20.242179  497714 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:27:20Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:27:20.242295  497714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:27:20.256882  497714 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:27:20.256904  497714 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:27:20.256986  497714 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:27:20.268421  497714 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:27:20.268888  497714 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-784377" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:27:20.269033  497714 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-784377" cluster setting kubeconfig missing "default-k8s-diff-port-784377" context setting]
	I1227 10:27:20.269368  497714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:27:20.270732  497714 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:27:20.285350  497714 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 10:27:20.285383  497714 kubeadm.go:602] duration metric: took 28.472828ms to restartPrimaryControlPlane
	I1227 10:27:20.285412  497714 kubeadm.go:403] duration metric: took 141.638892ms to StartCluster
	I1227 10:27:20.285435  497714 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:27:20.285523  497714 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:27:20.286213  497714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:27:20.286455  497714 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:27:20.286870  497714 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:27:20.286941  497714 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-784377"
	I1227 10:27:20.286959  497714 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-784377"
	W1227 10:27:20.286972  497714 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:27:20.286994  497714 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:27:20.287811  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:20.288092  497714 config.go:182] Loaded profile config "default-k8s-diff-port-784377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:27:20.288165  497714 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-784377"
	I1227 10:27:20.288183  497714 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-784377"
	W1227 10:27:20.288190  497714 addons.go:248] addon dashboard should already be in state true
	I1227 10:27:20.288237  497714 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:27:20.288670  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:20.289112  497714 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-784377"
	I1227 10:27:20.289134  497714 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-784377"
	I1227 10:27:20.289409  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:20.291835  497714 out.go:179] * Verifying Kubernetes components...
	I1227 10:27:20.296276  497714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:27:20.335783  497714 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:27:20.338864  497714 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:27:20.339098  497714 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:27:20.342980  497714 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:27:20.343001  497714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:27:20.343069  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:20.343265  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:27:20.343278  497714 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:27:20.343319  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:20.352010  497714 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-784377"
	W1227 10:27:20.352032  497714 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:27:20.352057  497714 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:27:20.352486  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:20.391063  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:20.408171  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:20.410463  497714 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:27:20.410487  497714 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:27:20.410544  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:20.440510  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:20.639568  497714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:27:20.645602  497714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:27:20.655941  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:27:20.655975  497714 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:27:20.684953  497714 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-784377" to be "Ready" ...
	I1227 10:27:20.710880  497714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:27:20.730801  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:27:20.730884  497714 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:27:20.813002  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:27:20.813080  497714 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:27:20.838371  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:27:20.838434  497714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:27:20.860072  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:27:20.860145  497714 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:27:20.872926  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:27:20.872999  497714 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:27:20.885999  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:27:20.886073  497714 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:27:20.961017  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:27:20.961095  497714 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:27:21.003548  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:27:21.003644  497714 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:27:21.020567  497714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:27:23.649802  497714 node_ready.go:49] node "default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:23.649839  497714 node_ready.go:38] duration metric: took 2.964801426s for node "default-k8s-diff-port-784377" to be "Ready" ...
	I1227 10:27:23.649854  497714 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:27:23.649946  497714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:27:25.038844  497714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.393177559s)
	I1227 10:27:25.038915  497714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.327967137s)
	I1227 10:27:25.039164  497714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.011947117s)
	I1227 10:27:25.039384  497714 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.389409258s)
	I1227 10:27:25.039437  497714 api_server.go:72] duration metric: took 4.752949774s to wait for apiserver process to appear ...
	I1227 10:27:25.039451  497714 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:27:25.039468  497714 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1227 10:27:25.043176  497714 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-784377 addons enable metrics-server
	
	I1227 10:27:25.066989  497714 api_server.go:325] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 10:27:25.067023  497714 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 10:27:25.071671  497714 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 10:27:25.074924  497714 addons.go:530] duration metric: took 4.788049327s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 10:27:25.540440  497714 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1227 10:27:25.548751  497714 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1227 10:27:25.549926  497714 api_server.go:141] control plane version: v1.35.0
	I1227 10:27:25.549951  497714 api_server.go:131] duration metric: took 510.493498ms to wait for apiserver health ...
	I1227 10:27:25.549962  497714 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:27:25.553431  497714 system_pods.go:59] 8 kube-system pods found
	I1227 10:27:25.553476  497714 system_pods.go:61] "coredns-7d764666f9-kzx9l" [76a78735-c0bd-4e61-96b8-27aa62f2d606] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:27:25.553486  497714 system_pods.go:61] "etcd-default-k8s-diff-port-784377" [3c119831-097f-402d-84ac-5174f6e07ad1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:27:25.553495  497714 system_pods.go:61] "kindnet-sf4gn" [a46b9960-4c5b-4044-91fe-c24fb6ada404] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:27:25.553505  497714 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-784377" [3dc5cc47-c595-4435-9951-fa7812ebb41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:27:25.553520  497714 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-784377" [888c0c83-4e0a-449d-ad85-f6b8da83749f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:27:25.553533  497714 system_pods.go:61] "kube-proxy-qczcb" [c8664f73-e55c-41d1-b3d6-d8c69735ea44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:27:25.553540  497714 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-784377" [4bb42d3b-c5d7-40b0-b4d1-8a81f0d2a721] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:27:25.553546  497714 system_pods.go:61] "storage-provisioner" [58d25c76-fba6-4b47-b0f2-3505d7df97db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:27:25.553554  497714 system_pods.go:74] duration metric: took 3.586645ms to wait for pod list to return data ...
	I1227 10:27:25.553582  497714 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:27:25.558582  497714 default_sa.go:45] found service account: "default"
	I1227 10:27:25.558613  497714 default_sa.go:55] duration metric: took 5.024568ms for default service account to be created ...
	I1227 10:27:25.558625  497714 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:27:25.561744  497714 system_pods.go:86] 8 kube-system pods found
	I1227 10:27:25.561781  497714 system_pods.go:89] "coredns-7d764666f9-kzx9l" [76a78735-c0bd-4e61-96b8-27aa62f2d606] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:27:25.561799  497714 system_pods.go:89] "etcd-default-k8s-diff-port-784377" [3c119831-097f-402d-84ac-5174f6e07ad1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:27:25.561807  497714 system_pods.go:89] "kindnet-sf4gn" [a46b9960-4c5b-4044-91fe-c24fb6ada404] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:27:25.561815  497714 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-784377" [3dc5cc47-c595-4435-9951-fa7812ebb41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:27:25.561825  497714 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-784377" [888c0c83-4e0a-449d-ad85-f6b8da83749f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:27:25.561833  497714 system_pods.go:89] "kube-proxy-qczcb" [c8664f73-e55c-41d1-b3d6-d8c69735ea44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:27:25.561844  497714 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-784377" [4bb42d3b-c5d7-40b0-b4d1-8a81f0d2a721] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:27:25.561853  497714 system_pods.go:89] "storage-provisioner" [58d25c76-fba6-4b47-b0f2-3505d7df97db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:27:25.561861  497714 system_pods.go:126] duration metric: took 3.230588ms to wait for k8s-apps to be running ...
	I1227 10:27:25.561881  497714 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:27:25.561940  497714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:27:25.575660  497714 system_svc.go:56] duration metric: took 13.768254ms WaitForService to wait for kubelet
	I1227 10:27:25.575699  497714 kubeadm.go:587] duration metric: took 5.289202924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:27:25.575719  497714 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:27:25.579854  497714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:27:25.579891  497714 node_conditions.go:123] node cpu capacity is 2
	I1227 10:27:25.579906  497714 node_conditions.go:105] duration metric: took 4.177026ms to run NodePressure ...
	I1227 10:27:25.579926  497714 start.go:242] waiting for startup goroutines ...
	I1227 10:27:25.579939  497714 start.go:247] waiting for cluster config update ...
	I1227 10:27:25.579952  497714 start.go:256] writing updated cluster config ...
	I1227 10:27:25.580290  497714 ssh_runner.go:195] Run: rm -f paused
	I1227 10:27:25.584843  497714 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:27:25.588777  497714 pod_ready.go:83] waiting for pod "coredns-7d764666f9-kzx9l" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:27:27.608193  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:30.097896  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:32.595138  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:34.595435  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:36.596176  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:39.094360  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:41.095002  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:43.595029  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:46.095903  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:48.594909  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:51.094407  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:53.095326  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:55.594905  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	I1227 10:27:57.095258  497714 pod_ready.go:94] pod "coredns-7d764666f9-kzx9l" is "Ready"
	I1227 10:27:57.095289  497714 pod_ready.go:86] duration metric: took 31.506435247s for pod "coredns-7d764666f9-kzx9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.097585  497714 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.101933  497714 pod_ready.go:94] pod "etcd-default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:57.101964  497714 pod_ready.go:86] duration metric: took 4.354103ms for pod "etcd-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.104425  497714 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.109279  497714 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:57.109303  497714 pod_ready.go:86] duration metric: took 4.850519ms for pod "kube-apiserver-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.111715  497714 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.292858  497714 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:57.292888  497714 pod_ready.go:86] duration metric: took 181.146747ms for pod "kube-controller-manager-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.493542  497714 pod_ready.go:83] waiting for pod "kube-proxy-qczcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.892654  497714 pod_ready.go:94] pod "kube-proxy-qczcb" is "Ready"
	I1227 10:27:57.892684  497714 pod_ready.go:86] duration metric: took 399.056171ms for pod "kube-proxy-qczcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:58.093056  497714 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:58.493582  497714 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:58.493612  497714 pod_ready.go:86] duration metric: took 400.530089ms for pod "kube-scheduler-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:58.493627  497714 pod_ready.go:40] duration metric: took 32.908749675s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:27:58.555001  497714 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:27:58.558201  497714 out.go:203] 
	W1227 10:27:58.561216  497714 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:27:58.564099  497714 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:27:58.567050  497714 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-784377" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.332463673Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.339652521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.339691889Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.339720394Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.343340566Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.343400563Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.343421568Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.346679562Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.346716091Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.346739173Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.350010607Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.35004689Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.557776167Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b923cfc8-eeb8-4356-b762-42a71881e41d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.559712424Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=63610343-1e5c-4e98-860e-43e67a77eeca name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.562759897Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8/dashboard-metrics-scraper" id=aeaebff9-32e3-45aa-ad00-c58fb032cb98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.562895661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.574018998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.575714211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.589818862Z" level=info msg="Created container c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8/dashboard-metrics-scraper" id=aeaebff9-32e3-45aa-ad00-c58fb032cb98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.592615485Z" level=info msg="Starting container: c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793" id=19695cb3-0359-486d-b30a-f7a77aeb7660 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.596169309Z" level=info msg="Started container" PID=1737 containerID=c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8/dashboard-metrics-scraper id=19695cb3-0359-486d-b30a-f7a77aeb7660 name=/runtime.v1.RuntimeService/StartContainer sandboxID=88e0b402942b41f06db706f23df2695cee85be748b9026b50bb46fe5df6c2d83
	Dec 27 10:28:05 default-k8s-diff-port-784377 conmon[1735]: conmon c0d39842e5d659a17d40 <ninfo>: container 1737 exited with status 1
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.761039068Z" level=info msg="Removing container: 6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa" id=3cffb7dd-9ee7-4081-b4fe-6a1c9556a35f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.772115693Z" level=info msg="Error loading conmon cgroup of container 6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa: cgroup deleted" id=3cffb7dd-9ee7-4081-b4fe-6a1c9556a35f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.779113007Z" level=info msg="Removed container 6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8/dashboard-metrics-scraper" id=3cffb7dd-9ee7-4081-b4fe-6a1c9556a35f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c0d39842e5d65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   88e0b402942b4       dashboard-metrics-scraper-867fb5f87b-xt2l8             kubernetes-dashboard
	ad46cb813a6e5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   4ff304af95bf9       storage-provisioner                                    kube-system
	ab4c87363417a       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago      Running             kubernetes-dashboard        0                   d55810621fb08       kubernetes-dashboard-b84665fb8-v59x7                   kubernetes-dashboard
	257cb8d68427e       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           49 seconds ago      Running             coredns                     1                   02f4bb7a7a27c       coredns-7d764666f9-kzx9l                               kube-system
	db115809dfa3a       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           49 seconds ago      Running             kube-proxy                  1                   79df2d4ea77b3       kube-proxy-qczcb                                       kube-system
	0d1505b051862       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   b33cddb637735       busybox                                                default
	26cd0898742ae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   4ff304af95bf9       storage-provisioner                                    kube-system
	884d1d9a3f6b6       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           49 seconds ago      Running             kindnet-cni                 1                   3e19a4fab7873       kindnet-sf4gn                                          kube-system
	09c17b1ddb55e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           54 seconds ago      Running             kube-controller-manager     1                   317ccb2367adb       kube-controller-manager-default-k8s-diff-port-784377   kube-system
	341d59906e656       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           54 seconds ago      Running             etcd                        1                   0450984a47bd7       etcd-default-k8s-diff-port-784377                      kube-system
	b7d4b7ac920dc       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           54 seconds ago      Running             kube-apiserver              1                   926ec7ab698ba       kube-apiserver-default-k8s-diff-port-784377            kube-system
	ae0f1af189b62       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           54 seconds ago      Running             kube-scheduler              1                   b4d675c703531       kube-scheduler-default-k8s-diff-port-784377            kube-system
	
	
	==> coredns [257cb8d68427e28263c7b7cfc7c556f6a7f666cc3864d1e63a3754d34f6811c0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59843 - 5201 "HINFO IN 2518711960949654255.8563104438600200251. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.047778196s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-784377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-784377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=default-k8s-diff-port-784377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_26_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:26:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-784377
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:28:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:27:54 +0000   Sat, 27 Dec 2025 10:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:27:54 +0000   Sat, 27 Dec 2025 10:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:27:54 +0000   Sat, 27 Dec 2025 10:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:27:54 +0000   Sat, 27 Dec 2025 10:26:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-784377
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                0c39d998-d532-41c6-a784-b1225108f230
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-7d764666f9-kzx9l                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     102s
	  kube-system                 etcd-default-k8s-diff-port-784377                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         108s
	  kube-system                 kindnet-sf4gn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-default-k8s-diff-port-784377             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-784377    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-qczcb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-default-k8s-diff-port-784377             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-xt2l8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-v59x7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  103s  node-controller  Node default-k8s-diff-port-784377 event: Registered Node default-k8s-diff-port-784377 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node default-k8s-diff-port-784377 event: Registered Node default-k8s-diff-port-784377 in Controller
	
	
	==> dmesg <==
	[Dec27 09:57] overlayfs: idmapped layers are currently not supported
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [341d59906e656a82c9766c6d3a223e1725be3195ff00ad2041b4404437e3f112] <==
	{"level":"info","ts":"2025-12-27T10:27:20.479246Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:27:20.479323Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:27:20.482902Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:27:20.497887Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:27:20.497935Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:27:20.486182Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:27:20.497968Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:27:21.270311Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:27:21.270434Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:27:21.270505Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:27:21.270545Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:27:21.270585Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:27:21.280014Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:27:21.280118Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:27:21.280186Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:27:21.280226Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:27:21.284153Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-784377 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:27:21.284239Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:27:21.284590Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:27:21.284637Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:27:21.284292Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:27:21.288858Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:27:21.312752Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:27:21.320640Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:27:21.336476Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 10:28:15 up  2:10,  0 user,  load average: 1.63, 1.72, 1.87
	Linux default-k8s-diff-port-784377 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [884d1d9a3f6b6d4e5ce1a17c115ad0f1b2b6ab7bce7608cb12ecf6a2e5c23c23] <==
	I1227 10:27:25.126847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:27:25.136236       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:27:25.136540       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:27:25.136584       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:27:25.136633       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:27:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:27:25.325244       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:27:25.325326       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:27:25.325360       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:27:25.325502       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:27:55.325501       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:27:55.325575       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 10:27:55.325518       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:27:55.325668       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1227 10:27:56.726049       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:27:56.726084       1 metrics.go:72] Registering metrics
	I1227 10:27:56.726167       1 controller.go:711] "Syncing nftables rules"
	I1227 10:28:05.326059       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:28:05.326331       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b7d4b7ac920dca9cef9cb7f9bfbabba055bba950b9a98343d443ec7b05ee967a] <==
	I1227 10:27:23.982965       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:27:23.985935       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1227 10:27:24.009036       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:27:24.016039       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:24.016065       1 policy_source.go:248] refreshing policies
	I1227 10:27:24.016532       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:27:24.032498       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:27:24.052351       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:27:24.052623       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:24.052665       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:24.052685       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:27:24.056168       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:27:24.058648       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:27:24.064323       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:27:24.389749       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:27:24.431043       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:27:24.497711       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:27:24.552370       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:27:24.616942       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:27:24.621942       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:27:24.860514       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.185.18"}
	I1227 10:27:24.902090       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.206.24"}
	I1227 10:27:27.306783       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:27:27.405094       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:27:27.579415       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [09c17b1ddb55eb299dcb7712ef525f6d66206db384cf3265ba341b6fbab81ddb] <==
	I1227 10:27:26.884793       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.884867       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.884936       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885115       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885810       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.886615       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:27:26.886753       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-784377"
	I1227 10:27:26.886841       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 10:27:26.885840       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885820       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885827       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885861       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885904       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:27:26.889987       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:27:26.890021       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:27:26.890051       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885925       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885834       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885846       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885853       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.914572       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.966020       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.979197       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.979221       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:27:26.979227       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [db115809dfa3abd66df9cffc3c51771033570d02c2b6cd1553eaa64df166aa8f] <==
	I1227 10:27:25.246694       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:27:25.333536       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:27:25.434629       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:25.434668       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:27:25.434743       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:27:25.453884       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:27:25.453946       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:27:25.457557       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:27:25.457938       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:27:25.458038       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:27:25.461857       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:27:25.461928       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:27:25.461993       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:27:25.465389       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:27:25.462704       1 config.go:309] "Starting node config controller"
	I1227 10:27:25.465407       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:27:25.465413       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:27:25.463262       1 config.go:200] "Starting service config controller"
	I1227 10:27:25.465420       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:27:25.571292       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:27:25.571348       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 10:27:25.573170       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ae0f1af189b62b675aaca897e11c40c3b47839880ed544ff4c379f37a3b95b8d] <==
	I1227 10:27:22.413240       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:27:23.696200       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:27:23.696230       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:27:23.696239       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:27:23.696246       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:27:23.850547       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:27:23.866351       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:27:23.880574       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:27:23.880933       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:27:23.884034       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:27:23.884093       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 10:27:23.955769       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:27:23.973161       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1227 10:27:25.384157       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:27:35 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:35.923695     781 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-784377" containerName="kube-scheduler"
	Dec 27 10:27:36 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:36.661901     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-v59x7" containerName="kubernetes-dashboard"
	Dec 27 10:27:36 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:36.664985     781 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-784377" containerName="kube-scheduler"
	Dec 27 10:27:37 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:37.666907     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-v59x7" containerName="kubernetes-dashboard"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:43.175215     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:43.175264     781 scope.go:122] "RemoveContainer" containerID="f187d5a7f6fb45554d59e9a757781ec678e0a4d0b173eaee637651087b2261d5"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:43.684270     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:43.684554     781 scope.go:122] "RemoveContainer" containerID="f187d5a7f6fb45554d59e9a757781ec678e0a4d0b173eaee637651087b2261d5"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:43.684671     781 scope.go:122] "RemoveContainer" containerID="6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:43.684843     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xt2l8_kubernetes-dashboard(8d3afba4-0ff1-4578-af46-9b066ddb1e2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" podUID="8d3afba4-0ff1-4578-af46-9b066ddb1e2b"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:43.705729     781 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-v59x7" podStartSLOduration=8.399024962 podStartE2EDuration="16.705712111s" podCreationTimestamp="2025-12-27 10:27:27 +0000 UTC" firstStartedPulling="2025-12-27 10:27:27.928282451 +0000 UTC m=+8.596342687" lastFinishedPulling="2025-12-27 10:27:36.2349696 +0000 UTC m=+16.903029836" observedRunningTime="2025-12-27 10:27:36.684894128 +0000 UTC m=+17.352954365" watchObservedRunningTime="2025-12-27 10:27:43.705712111 +0000 UTC m=+24.373772365"
	Dec 27 10:27:53 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:53.175511     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:27:53 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:53.175579     781 scope.go:122] "RemoveContainer" containerID="6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa"
	Dec 27 10:27:53 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:53.175765     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xt2l8_kubernetes-dashboard(8d3afba4-0ff1-4578-af46-9b066ddb1e2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" podUID="8d3afba4-0ff1-4578-af46-9b066ddb1e2b"
	Dec 27 10:27:55 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:55.715163     781 scope.go:122] "RemoveContainer" containerID="26cd0898742aefbe3c2ea283eb8b4ba807fc319c4c99f42beca15d9b71897019"
	Dec 27 10:27:56 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:56.617147     781 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-kzx9l" containerName="coredns"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: E1227 10:28:05.557311     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: I1227 10:28:05.557348     781 scope.go:122] "RemoveContainer" containerID="6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: I1227 10:28:05.746095     781 scope.go:122] "RemoveContainer" containerID="6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: E1227 10:28:05.746384     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: I1227 10:28:05.746403     781 scope.go:122] "RemoveContainer" containerID="c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: E1227 10:28:05.746546     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xt2l8_kubernetes-dashboard(8d3afba4-0ff1-4578-af46-9b066ddb1e2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" podUID="8d3afba4-0ff1-4578-af46-9b066ddb1e2b"
	Dec 27 10:28:11 default-k8s-diff-port-784377 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:28:11 default-k8s-diff-port-784377 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:28:11 default-k8s-diff-port-784377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ab4c87363417a0b5b98320affadd839b09afd291a3764dcc69becacd5d94a9de] <==
	2025/12/27 10:27:36 Starting overwatch
	2025/12/27 10:27:36 Using namespace: kubernetes-dashboard
	2025/12/27 10:27:36 Using in-cluster config to connect to apiserver
	2025/12/27 10:27:36 Using secret token for csrf signing
	2025/12/27 10:27:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:27:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:27:36 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:27:36 Generating JWE encryption key
	2025/12/27 10:27:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:27:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:27:36 Initializing JWE encryption key from synchronized object
	2025/12/27 10:27:36 Creating in-cluster Sidecar client
	2025/12/27 10:27:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:27:36 Serving insecurely on HTTP port: 9090
	2025/12/27 10:28:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [26cd0898742aefbe3c2ea283eb8b4ba807fc319c4c99f42beca15d9b71897019] <==
	I1227 10:27:25.238015       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:27:55.244313       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ad46cb813a6e5f720bcfeede245df35f45ad054aad108d8e71b257b2fda7fff6] <==
	I1227 10:27:55.760588       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:27:55.773160       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:27:55.773216       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:27:55.780831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:27:59.235726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:03.496709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:07.095046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:10.149287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:13.171828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:13.177116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:28:13.177273       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:28:13.177444       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-784377_092c11e0-c4c7-4427-80c6-acf4931f6180!
	I1227 10:28:13.178922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d0a5a360-e8dc-4c99-8635-c67876792b94", APIVersion:"v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-784377_092c11e0-c4c7-4427-80c6-acf4931f6180 became leader
	W1227 10:28:13.181685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:13.187726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:28:13.278265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-784377_092c11e0-c4c7-4427-80c6-acf4931f6180!
	W1227 10:28:15.190796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:15.195698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
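(Note: the kube-system/k8s.io-minikube-hostpath lease that the second storage-provisioner instance acquires in the log above is an ordinary Endpoints object, which is also why the client prints the v1 Endpoints deprecation warnings. A minimal sketch for checking who currently holds that lease, assuming kubectl is on PATH with its context pointed at this profile; the annotation key is the standard client-go leader-election one and is not taken from these logs:

	// check_lease.go: sketch only, not part of minikube or helpers_test.go.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// client-go's Endpoints-based leader election records the current holder
		// in this annotation on the lock object seen in the logs above.
		jsonpath := `jsonpath={.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}`
		out, err := exec.Command("kubectl", "-n", "kube-system",
			"get", "endpoints", "k8s.io-minikube-hostpath", "-o", jsonpath).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl failed: %v\n%s", err, out)
		}
		fmt.Printf("leader record: %s\n", out)
	}
)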
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377: exit status 2 (391.22828ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
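(The {{.APIServer}} probe just above shows how these status checks behave on a partially paused cluster: stdout still reports "Running" while the command exits with code 2, which the harness explicitly tolerates ("may be ok"). A rough stand-in for that check, using the same binary path and profile name as this report; this is not the actual helpers_test.go code:

	// status_probe.go: sketch of the status check performed above.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}",
			"-p", "default-k8s-diff-port-784377",
			"-n", "default-k8s-diff-port-784377")
		out, err := cmd.Output()

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Nonzero here encodes component state; in the run above it was 2
			// while stdout still read "Running".
			code = exitErr.ExitCode()
		} else if err != nil {
			log.Fatalf("could not run minikube: %v", err)
		}
		fmt.Printf("apiserver state: %q (exit code %d)\n", string(out), code)
	}
)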
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-784377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-784377
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-784377:

-- stdout --
	[
	    {
	        "Id": "e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94",
	        "Created": "2025-12-27T10:26:10.469840578Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 497843,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:27:12.71478662Z",
	            "FinishedAt": "2025-12-27T10:27:11.928017342Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/hostname",
	        "HostsPath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/hosts",
	        "LogPath": "/var/lib/docker/containers/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94/e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94-json.log",
	        "Name": "/default-k8s-diff-port-784377",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-784377:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-784377",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e19c4a001b93ff684f495ddae4d61b7e3ad0fee4af98b304046e46e3d2872b94",
	                "LowerDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823/merged",
	                "UpperDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823/diff",
	                "WorkDir": "/var/lib/docker/overlay2/669c5319c1d0c59d2ab9d4ad70e7ed637c44fef15c9baf3d78804b946bb1b823/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-784377",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-784377/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-784377",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-784377",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-784377",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08e264ea8c33cff4bc498ce2fbe2bc30a4b1856dc1de78fcde3c8c15cb3be7a1",
	            "SandboxKey": "/var/run/docker/netns/08e264ea8c33",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-784377": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:82:b8:08:55:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8d733bf5719fdacab69d83d9ca4658b4a637aafdad81690293c70d13f01e7f9",
	                    "EndpointID": "f880b87a28672bf5ab9826b19253cccba4f2225f44d3ebb16f5ab5a961313a87",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-784377",
	                        "e19c4a001b93"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
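(Two details in the inspect dump above tie directly back to the start flags: HostConfig.Memory of 3221225472 bytes is exactly 3072 MiB (3072 × 1024 × 1024, from --memory=3072), and NanoCpus of 2000000000 is 2 CPUs. The port map is what the later provisioning steps rely on; a minimal sketch that extracts the published SSH port with the same Go template minikube's cli_runner uses in the "Last Start" log below (it assumes the docker CLI is on PATH and the container still exists) would print 33423 against the state captured here:

	// inspect_ports.go: sketch only, mirroring the cli_runner template below.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const name = "default-k8s-diff-port-784377"
		// Same Go template as the docker container inspect calls in the Last Start log.
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			log.Fatalf("docker inspect failed: %v", err)
		}
		fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out)))
	}
)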
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377: exit status 2 (384.777663ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-784377 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-784377 logs -n 25: (1.273213656s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-528820       │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:17 UTC │
	│ start   │ -p cert-expiration-528820 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-528820       │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ delete  │ -p cert-expiration-528820                                                                                                                                                                                                                     │ cert-expiration-528820       │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │ 27 Dec 25 10:20 UTC │
	│ start   │ -p force-systemd-flag-915850 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:20 UTC │                     │
	│ delete  │ -p force-systemd-env-193016                                                                                                                                                                                                                   │ force-systemd-env-193016     │ jenkins │ v1.37.0 │ 27 Dec 25 10:22 UTC │ 27 Dec 25 10:22 UTC │
	│ start   │ -p cert-options-810217 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ cert-options-810217 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ ssh     │ -p cert-options-810217 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ delete  │ -p cert-options-810217                                                                                                                                                                                                                        │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │                     │
	│ stop    │ -p old-k8s-version-482317 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-482317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:25 UTC │
	│ image   │ old-k8s-version-482317 image list --format=json                                                                                                                                                                                               │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │ 27 Dec 25 10:25 UTC │
	│ pause   │ -p old-k8s-version-482317 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │                     │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-784377 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-784377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ image   │ default-k8s-diff-port-784377 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ pause   │ -p default-k8s-diff-port-784377 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:27:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:27:12.433195  497714 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:27:12.433388  497714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:27:12.433418  497714 out.go:374] Setting ErrFile to fd 2...
	I1227 10:27:12.433440  497714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:27:12.433711  497714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:27:12.434148  497714 out.go:368] Setting JSON to false
	I1227 10:27:12.435051  497714 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7786,"bootTime":1766823447,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:27:12.435152  497714 start.go:143] virtualization:  
	I1227 10:27:12.439265  497714 out.go:179] * [default-k8s-diff-port-784377] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:27:12.442544  497714 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:27:12.442617  497714 notify.go:221] Checking for updates...
	I1227 10:27:12.448656  497714 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:27:12.451589  497714 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:27:12.456340  497714 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:27:12.459255  497714 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:27:12.463104  497714 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:27:12.466525  497714 config.go:182] Loaded profile config "default-k8s-diff-port-784377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:27:12.467208  497714 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:27:12.507044  497714 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:27:12.507194  497714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:27:12.567276  497714 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:27:12.557964052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:27:12.567380  497714 docker.go:319] overlay module found
	I1227 10:27:12.570566  497714 out.go:179] * Using the docker driver based on existing profile
	I1227 10:27:12.573449  497714 start.go:309] selected driver: docker
	I1227 10:27:12.573468  497714 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:27:12.573587  497714 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:27:12.574271  497714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:27:12.627060  497714 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:27:12.617570699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:27:12.627398  497714 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:27:12.627433  497714 cni.go:84] Creating CNI manager for ""
	I1227 10:27:12.627491  497714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:27:12.627537  497714 start.go:353] cluster config:
	{Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:27:12.630661  497714 out.go:179] * Starting "default-k8s-diff-port-784377" primary control-plane node in "default-k8s-diff-port-784377" cluster
	I1227 10:27:12.633474  497714 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:27:12.636449  497714 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:27:12.639284  497714 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:27:12.639380  497714 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:27:12.639324  497714 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:27:12.639411  497714 cache.go:65] Caching tarball of preloaded images
	I1227 10:27:12.639537  497714 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:27:12.639547  497714 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:27:12.639660  497714 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/config.json ...
	I1227 10:27:12.658926  497714 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:27:12.658945  497714 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:27:12.658966  497714 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:27:12.658998  497714 start.go:360] acquireMachinesLock for default-k8s-diff-port-784377: {Name:mkae337831628ba1f53545c8de178f498d429381 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:27:12.659059  497714 start.go:364] duration metric: took 44.957µs to acquireMachinesLock for "default-k8s-diff-port-784377"
	I1227 10:27:12.659081  497714 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:27:12.659086  497714 fix.go:54] fixHost starting: 
	I1227 10:27:12.659346  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:12.675867  497714 fix.go:112] recreateIfNeeded on default-k8s-diff-port-784377: state=Stopped err=<nil>
	W1227 10:27:12.675897  497714 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:27:12.679229  497714 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-784377" ...
	I1227 10:27:12.679331  497714 cli_runner.go:164] Run: docker start default-k8s-diff-port-784377
	I1227 10:27:12.936023  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:12.958294  497714 kic.go:430] container "default-k8s-diff-port-784377" state is running.
	I1227 10:27:12.959019  497714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-784377
	I1227 10:27:12.983011  497714 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/config.json ...
	I1227 10:27:12.983237  497714 machine.go:94] provisionDockerMachine start ...
	I1227 10:27:12.983308  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:13.007269  497714 main.go:144] libmachine: Using SSH client type: native
	I1227 10:27:13.007622  497714 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 10:27:13.007632  497714 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:27:13.008842  497714 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:27:16.148406  497714 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-784377
	
	I1227 10:27:16.148431  497714 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-784377"
	I1227 10:27:16.148503  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:16.167605  497714 main.go:144] libmachine: Using SSH client type: native
	I1227 10:27:16.167925  497714 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 10:27:16.167945  497714 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-784377 && echo "default-k8s-diff-port-784377" | sudo tee /etc/hostname
	I1227 10:27:16.322139  497714 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-784377
	
	I1227 10:27:16.322289  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:16.340214  497714 main.go:144] libmachine: Using SSH client type: native
	I1227 10:27:16.340546  497714 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 10:27:16.340571  497714 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-784377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-784377/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-784377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:27:16.484433  497714 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:27:16.484466  497714 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:27:16.484497  497714 ubuntu.go:190] setting up certificates
	I1227 10:27:16.484507  497714 provision.go:84] configureAuth start
	I1227 10:27:16.484571  497714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-784377
	I1227 10:27:16.501907  497714 provision.go:143] copyHostCerts
	I1227 10:27:16.501978  497714 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:27:16.501998  497714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:27:16.502085  497714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:27:16.502198  497714 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:27:16.502210  497714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:27:16.502240  497714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:27:16.502310  497714 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:27:16.502319  497714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:27:16.502345  497714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:27:16.502433  497714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-784377 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-784377 localhost minikube]
	I1227 10:27:16.738693  497714 provision.go:177] copyRemoteCerts
	I1227 10:27:16.738767  497714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:27:16.738824  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:16.755754  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:16.855886  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:27:16.873298  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1227 10:27:16.890999  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:27:16.909030  497714 provision.go:87] duration metric: took 424.498692ms to configureAuth
	I1227 10:27:16.909059  497714 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:27:16.909260  497714 config.go:182] Loaded profile config "default-k8s-diff-port-784377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:27:16.909366  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:16.926902  497714 main.go:144] libmachine: Using SSH client type: native
	I1227 10:27:16.927229  497714 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1227 10:27:16.927245  497714 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:27:17.299230  497714 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:27:17.299272  497714 machine.go:97] duration metric: took 4.316004457s to provisionDockerMachine
	I1227 10:27:17.299284  497714 start.go:293] postStartSetup for "default-k8s-diff-port-784377" (driver="docker")
	I1227 10:27:17.299295  497714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:27:17.299356  497714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:27:17.299398  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:17.319229  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:17.420095  497714 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:27:17.423425  497714 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:27:17.423455  497714 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:27:17.423486  497714 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:27:17.423548  497714 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:27:17.423674  497714 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:27:17.423780  497714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:27:17.431052  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:27:17.448519  497714 start.go:296] duration metric: took 149.219464ms for postStartSetup
	I1227 10:27:17.448623  497714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:27:17.448676  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:17.465456  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:17.561042  497714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:27:17.565652  497714 fix.go:56] duration metric: took 4.906559263s for fixHost
	I1227 10:27:17.565682  497714 start.go:83] releasing machines lock for "default-k8s-diff-port-784377", held for 4.90661249s
	I1227 10:27:17.565750  497714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-784377
	I1227 10:27:17.582866  497714 ssh_runner.go:195] Run: cat /version.json
	I1227 10:27:17.582934  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:17.583203  497714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:27:17.583268  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:17.602686  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:17.604828  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:17.699785  497714 ssh_runner.go:195] Run: systemctl --version
	I1227 10:27:17.796528  497714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:27:17.831310  497714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:27:17.835644  497714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:27:17.835776  497714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:27:17.843669  497714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:27:17.843696  497714 start.go:496] detecting cgroup driver to use...
	I1227 10:27:17.843747  497714 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:27:17.843801  497714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:27:17.859622  497714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:27:17.873045  497714 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:27:17.873126  497714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:27:17.888334  497714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:27:17.901671  497714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:27:18.012837  497714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:27:18.135702  497714 docker.go:234] disabling docker service ...
	I1227 10:27:18.135823  497714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:27:18.151349  497714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:27:18.164676  497714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:27:18.274725  497714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:27:18.394497  497714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:27:18.407215  497714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:27:18.421706  497714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:27:18.421832  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.430900  497714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:27:18.431013  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.439868  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.449603  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.459743  497714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:27:18.472140  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.481636  497714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.499518  497714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:27:18.510371  497714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:27:18.518501  497714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:27:18.529412  497714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:27:18.646930  497714 ssh_runner.go:195] Run: sudo systemctl restart crio
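[editor's note] The commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then restart the service. A minimal sketch of how one could confirm those edits landed, assuming the same drop-in path the log uses; this is not part of the test run:

# Show the fields the sed edits above are expected to have set
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
# Confirm CRI-O came back up after the restart
sudo systemctl is-active crio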
	I1227 10:27:18.838881  497714 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:27:18.838975  497714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:27:18.842953  497714 start.go:574] Will wait 60s for crictl version
	I1227 10:27:18.843051  497714 ssh_runner.go:195] Run: which crictl
	I1227 10:27:18.846837  497714 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:27:18.872152  497714 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:27:18.872238  497714 ssh_runner.go:195] Run: crio --version
	I1227 10:27:18.903046  497714 ssh_runner.go:195] Run: crio --version
	I1227 10:27:18.935812  497714 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:27:18.938734  497714 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-784377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:27:18.955095  497714 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:27:18.959147  497714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:27:18.968949  497714 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:27:18.969070  497714 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:27:18.969132  497714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:27:19.006596  497714 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:27:19.006624  497714 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:27:19.006696  497714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:27:19.032509  497714 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:27:19.032533  497714 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:27:19.032542  497714 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.35.0 crio true true} ...
	I1227 10:27:19.032642  497714 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-784377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:27:19.032729  497714 ssh_runner.go:195] Run: crio config
	I1227 10:27:19.091645  497714 cni.go:84] Creating CNI manager for ""
	I1227 10:27:19.091679  497714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:27:19.091703  497714 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:27:19.091760  497714 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-784377 NodeName:default-k8s-diff-port-784377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:27:19.091921  497714 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-784377"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:27:19.092065  497714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:27:19.099796  497714 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:27:19.099908  497714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:27:19.107401  497714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1227 10:27:19.120606  497714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:27:19.133806  497714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1227 10:27:19.146997  497714 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:27:19.150955  497714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:27:19.160987  497714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:27:19.312025  497714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:27:19.332704  497714 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377 for IP: 192.168.76.2
	I1227 10:27:19.332729  497714 certs.go:195] generating shared ca certs ...
	I1227 10:27:19.332764  497714 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:27:19.332903  497714 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:27:19.332954  497714 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:27:19.332965  497714 certs.go:257] generating profile certs ...
	I1227 10:27:19.333058  497714 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.key
	I1227 10:27:19.333134  497714 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key.e1bcd003
	I1227 10:27:19.333177  497714 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.key
	I1227 10:27:19.333297  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:27:19.333338  497714 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:27:19.333353  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:27:19.333387  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:27:19.333412  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:27:19.333442  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:27:19.333494  497714 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:27:19.334063  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:27:19.355687  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:27:19.377103  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:27:19.397478  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:27:19.426271  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1227 10:27:19.458603  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:27:19.486297  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:27:19.508784  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 10:27:19.536070  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:27:19.559409  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:27:19.582919  497714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:27:19.605519  497714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:27:19.619266  497714 ssh_runner.go:195] Run: openssl version
	I1227 10:27:19.625734  497714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:27:19.634084  497714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:27:19.642389  497714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:27:19.646260  497714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:27:19.646378  497714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:27:19.689687  497714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:27:19.697934  497714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:27:19.705327  497714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:27:19.714714  497714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:27:19.718694  497714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:27:19.718763  497714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:27:19.760454  497714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:27:19.768108  497714 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:27:19.775553  497714 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:27:19.783298  497714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:27:19.787111  497714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:27:19.787177  497714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:27:19.829113  497714 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
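[editor's note] The three certificate blocks above follow the same pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and expose it as /etc/ssl/certs/<hash>.0 (b5213941 for minikubeCA here). A minimal sketch of that pattern, assuming the paths shown in the log; not part of the test run:

# The subject hash determines the /etc/ssl/certs/<hash>.0 symlink name
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
ls -l "/etc/ssl/certs/${HASH}.0"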
	I1227 10:27:19.837027  497714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:27:19.841040  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:27:19.884430  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:27:19.926492  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:27:19.969445  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:27:20.036516  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:27:20.082941  497714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 10:27:20.143782  497714 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-784377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-784377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:27:20.143883  497714 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:27:20.144015  497714 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:27:20.208793  497714 cri.go:96] found id: "341d59906e656a82c9766c6d3a223e1725be3195ff00ad2041b4404437e3f112"
	I1227 10:27:20.208818  497714 cri.go:96] found id: "b7d4b7ac920dca9cef9cb7f9bfbabba055bba950b9a98343d443ec7b05ee967a"
	I1227 10:27:20.208833  497714 cri.go:96] found id: "ae0f1af189b62b675aaca897e11c40c3b47839880ed544ff4c379f37a3b95b8d"
	I1227 10:27:20.208837  497714 cri.go:96] found id: ""
	I1227 10:27:20.208918  497714 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:27:20.242179  497714 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:27:20Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:27:20.242295  497714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:27:20.256882  497714 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:27:20.256904  497714 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:27:20.256986  497714 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:27:20.268421  497714 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:27:20.268888  497714 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-784377" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:27:20.269033  497714 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-784377" cluster setting kubeconfig missing "default-k8s-diff-port-784377" context setting]
	I1227 10:27:20.269368  497714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:27:20.270732  497714 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:27:20.285350  497714 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 10:27:20.285383  497714 kubeadm.go:602] duration metric: took 28.472828ms to restartPrimaryControlPlane
	I1227 10:27:20.285412  497714 kubeadm.go:403] duration metric: took 141.638892ms to StartCluster
	I1227 10:27:20.285435  497714 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:27:20.285523  497714 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:27:20.286213  497714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:27:20.286455  497714 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:27:20.286870  497714 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:27:20.286941  497714 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-784377"
	I1227 10:27:20.286959  497714 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-784377"
	W1227 10:27:20.286972  497714 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:27:20.286994  497714 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:27:20.287811  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:20.288092  497714 config.go:182] Loaded profile config "default-k8s-diff-port-784377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:27:20.288165  497714 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-784377"
	I1227 10:27:20.288183  497714 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-784377"
	W1227 10:27:20.288190  497714 addons.go:248] addon dashboard should already be in state true
	I1227 10:27:20.288237  497714 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:27:20.288670  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:20.289112  497714 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-784377"
	I1227 10:27:20.289134  497714 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-784377"
	I1227 10:27:20.289409  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:20.291835  497714 out.go:179] * Verifying Kubernetes components...
	I1227 10:27:20.296276  497714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:27:20.335783  497714 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:27:20.338864  497714 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:27:20.339098  497714 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:27:20.342980  497714 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:27:20.343001  497714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:27:20.343069  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:20.343265  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:27:20.343278  497714 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:27:20.343319  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:20.352010  497714 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-784377"
	W1227 10:27:20.352032  497714 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:27:20.352057  497714 host.go:66] Checking if "default-k8s-diff-port-784377" exists ...
	I1227 10:27:20.352486  497714 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-784377 --format={{.State.Status}}
	I1227 10:27:20.391063  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:20.408171  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:20.410463  497714 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:27:20.410487  497714 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:27:20.410544  497714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-784377
	I1227 10:27:20.440510  497714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/default-k8s-diff-port-784377/id_rsa Username:docker}
	I1227 10:27:20.639568  497714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:27:20.645602  497714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:27:20.655941  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:27:20.655975  497714 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:27:20.684953  497714 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-784377" to be "Ready" ...
	I1227 10:27:20.710880  497714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:27:20.730801  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:27:20.730884  497714 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:27:20.813002  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:27:20.813080  497714 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:27:20.838371  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:27:20.838434  497714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:27:20.860072  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:27:20.860145  497714 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:27:20.872926  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:27:20.872999  497714 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:27:20.885999  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:27:20.886073  497714 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:27:20.961017  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:27:20.961095  497714 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:27:21.003548  497714 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:27:21.003644  497714 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:27:21.020567  497714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:27:23.649802  497714 node_ready.go:49] node "default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:23.649839  497714 node_ready.go:38] duration metric: took 2.964801426s for node "default-k8s-diff-port-784377" to be "Ready" ...
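[editor's note] node_ready.go polls the node object through the client here; a roughly equivalent manual check with kubectl is sketched below. The context name matching the profile name is an assumption, and this command is not what the test itself runs:

# Hypothetical equivalent of the 6m node-readiness wait above
kubectl --context default-k8s-diff-port-784377 wait --for=condition=Ready \
  node/default-k8s-diff-port-784377 --timeout=6m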
	I1227 10:27:23.649854  497714 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:27:23.649946  497714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:27:25.038844  497714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.393177559s)
	I1227 10:27:25.038915  497714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.327967137s)
	I1227 10:27:25.039164  497714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.011947117s)
	I1227 10:27:25.039384  497714 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.389409258s)
	I1227 10:27:25.039437  497714 api_server.go:72] duration metric: took 4.752949774s to wait for apiserver process to appear ...
	I1227 10:27:25.039451  497714 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:27:25.039468  497714 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1227 10:27:25.043176  497714 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-784377 addons enable metrics-server
	
	I1227 10:27:25.066989  497714 api_server.go:325] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 10:27:25.067023  497714 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 10:27:25.071671  497714 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 10:27:25.074924  497714 addons.go:530] duration metric: took 4.788049327s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 10:27:25.540440  497714 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1227 10:27:25.548751  497714 api_server.go:325] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1227 10:27:25.549926  497714 api_server.go:141] control plane version: v1.35.0
	I1227 10:27:25.549951  497714 api_server.go:131] duration metric: took 510.493498ms to wait for apiserver health ...
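[editor's note] The 500 followed by 200 above is the apiserver finishing its rbac/bootstrap-roles post-start hook. A manual spot check of the same endpoint is sketched below; verbose mode returns the per-check breakdown seen in the log, and anonymous access to /healthz being allowed is an assumption about this cluster's auth settings:

# -k because the apiserver cert is not in the host trust store; ?verbose lists each health check
curl -sk "https://192.168.76.2:8444/healthz?verbose"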
	I1227 10:27:25.549962  497714 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:27:25.553431  497714 system_pods.go:59] 8 kube-system pods found
	I1227 10:27:25.553476  497714 system_pods.go:61] "coredns-7d764666f9-kzx9l" [76a78735-c0bd-4e61-96b8-27aa62f2d606] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:27:25.553486  497714 system_pods.go:61] "etcd-default-k8s-diff-port-784377" [3c119831-097f-402d-84ac-5174f6e07ad1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:27:25.553495  497714 system_pods.go:61] "kindnet-sf4gn" [a46b9960-4c5b-4044-91fe-c24fb6ada404] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:27:25.553505  497714 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-784377" [3dc5cc47-c595-4435-9951-fa7812ebb41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:27:25.553520  497714 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-784377" [888c0c83-4e0a-449d-ad85-f6b8da83749f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:27:25.553533  497714 system_pods.go:61] "kube-proxy-qczcb" [c8664f73-e55c-41d1-b3d6-d8c69735ea44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:27:25.553540  497714 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-784377" [4bb42d3b-c5d7-40b0-b4d1-8a81f0d2a721] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:27:25.553546  497714 system_pods.go:61] "storage-provisioner" [58d25c76-fba6-4b47-b0f2-3505d7df97db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:27:25.553554  497714 system_pods.go:74] duration metric: took 3.586645ms to wait for pod list to return data ...
	I1227 10:27:25.553582  497714 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:27:25.558582  497714 default_sa.go:45] found service account: "default"
	I1227 10:27:25.558613  497714 default_sa.go:55] duration metric: took 5.024568ms for default service account to be created ...
	I1227 10:27:25.558625  497714 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:27:25.561744  497714 system_pods.go:86] 8 kube-system pods found
	I1227 10:27:25.561781  497714 system_pods.go:89] "coredns-7d764666f9-kzx9l" [76a78735-c0bd-4e61-96b8-27aa62f2d606] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:27:25.561799  497714 system_pods.go:89] "etcd-default-k8s-diff-port-784377" [3c119831-097f-402d-84ac-5174f6e07ad1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:27:25.561807  497714 system_pods.go:89] "kindnet-sf4gn" [a46b9960-4c5b-4044-91fe-c24fb6ada404] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:27:25.561815  497714 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-784377" [3dc5cc47-c595-4435-9951-fa7812ebb41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:27:25.561825  497714 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-784377" [888c0c83-4e0a-449d-ad85-f6b8da83749f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:27:25.561833  497714 system_pods.go:89] "kube-proxy-qczcb" [c8664f73-e55c-41d1-b3d6-d8c69735ea44] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:27:25.561844  497714 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-784377" [4bb42d3b-c5d7-40b0-b4d1-8a81f0d2a721] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:27:25.561853  497714 system_pods.go:89] "storage-provisioner" [58d25c76-fba6-4b47-b0f2-3505d7df97db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:27:25.561861  497714 system_pods.go:126] duration metric: took 3.230588ms to wait for k8s-apps to be running ...
	I1227 10:27:25.561881  497714 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:27:25.561940  497714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:27:25.575660  497714 system_svc.go:56] duration metric: took 13.768254ms WaitForService to wait for kubelet
	I1227 10:27:25.575699  497714 kubeadm.go:587] duration metric: took 5.289202924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:27:25.575719  497714 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:27:25.579854  497714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:27:25.579891  497714 node_conditions.go:123] node cpu capacity is 2
	I1227 10:27:25.579906  497714 node_conditions.go:105] duration metric: took 4.177026ms to run NodePressure ...
	I1227 10:27:25.579926  497714 start.go:242] waiting for startup goroutines ...
	I1227 10:27:25.579939  497714 start.go:247] waiting for cluster config update ...
	I1227 10:27:25.579952  497714 start.go:256] writing updated cluster config ...
	I1227 10:27:25.580290  497714 ssh_runner.go:195] Run: rm -f paused
	I1227 10:27:25.584843  497714 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:27:25.588777  497714 pod_ready.go:83] waiting for pod "coredns-7d764666f9-kzx9l" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:27:27.608193  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:30.097896  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:32.595138  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:34.595435  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:36.596176  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:39.094360  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:41.095002  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:43.595029  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:46.095903  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:48.594909  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:51.094407  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:53.095326  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	W1227 10:27:55.594905  497714 pod_ready.go:104] pod "coredns-7d764666f9-kzx9l" is not "Ready", error: <nil>
	I1227 10:27:57.095258  497714 pod_ready.go:94] pod "coredns-7d764666f9-kzx9l" is "Ready"
	I1227 10:27:57.095289  497714 pod_ready.go:86] duration metric: took 31.506435247s for pod "coredns-7d764666f9-kzx9l" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.097585  497714 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.101933  497714 pod_ready.go:94] pod "etcd-default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:57.101964  497714 pod_ready.go:86] duration metric: took 4.354103ms for pod "etcd-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.104425  497714 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.109279  497714 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:57.109303  497714 pod_ready.go:86] duration metric: took 4.850519ms for pod "kube-apiserver-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.111715  497714 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.292858  497714 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:57.292888  497714 pod_ready.go:86] duration metric: took 181.146747ms for pod "kube-controller-manager-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.493542  497714 pod_ready.go:83] waiting for pod "kube-proxy-qczcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:57.892654  497714 pod_ready.go:94] pod "kube-proxy-qczcb" is "Ready"
	I1227 10:27:57.892684  497714 pod_ready.go:86] duration metric: took 399.056171ms for pod "kube-proxy-qczcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:58.093056  497714 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:58.493582  497714 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-784377" is "Ready"
	I1227 10:27:58.493612  497714 pod_ready.go:86] duration metric: took 400.530089ms for pod "kube-scheduler-default-k8s-diff-port-784377" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:27:58.493627  497714 pod_ready.go:40] duration metric: took 32.908749675s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:27:58.555001  497714 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:27:58.558201  497714 out.go:203] 
	W1227 10:27:58.561216  497714 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:27:58.564099  497714 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:27:58.567050  497714 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-784377" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.332463673Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.339652521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.339691889Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.339720394Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.343340566Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.343400563Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.343421568Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.346679562Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.346716091Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.346739173Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.350010607Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.35004689Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.557776167Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b923cfc8-eeb8-4356-b762-42a71881e41d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.559712424Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=63610343-1e5c-4e98-860e-43e67a77eeca name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.562759897Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8/dashboard-metrics-scraper" id=aeaebff9-32e3-45aa-ad00-c58fb032cb98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.562895661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.574018998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.575714211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.589818862Z" level=info msg="Created container c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8/dashboard-metrics-scraper" id=aeaebff9-32e3-45aa-ad00-c58fb032cb98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.592615485Z" level=info msg="Starting container: c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793" id=19695cb3-0359-486d-b30a-f7a77aeb7660 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.596169309Z" level=info msg="Started container" PID=1737 containerID=c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8/dashboard-metrics-scraper id=19695cb3-0359-486d-b30a-f7a77aeb7660 name=/runtime.v1.RuntimeService/StartContainer sandboxID=88e0b402942b41f06db706f23df2695cee85be748b9026b50bb46fe5df6c2d83
	Dec 27 10:28:05 default-k8s-diff-port-784377 conmon[1735]: conmon c0d39842e5d659a17d40 <ninfo>: container 1737 exited with status 1
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.761039068Z" level=info msg="Removing container: 6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa" id=3cffb7dd-9ee7-4081-b4fe-6a1c9556a35f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.772115693Z" level=info msg="Error loading conmon cgroup of container 6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa: cgroup deleted" id=3cffb7dd-9ee7-4081-b4fe-6a1c9556a35f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:28:05 default-k8s-diff-port-784377 crio[653]: time="2025-12-27T10:28:05.779113007Z" level=info msg="Removed container 6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8/dashboard-metrics-scraper" id=3cffb7dd-9ee7-4081-b4fe-6a1c9556a35f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c0d39842e5d65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   88e0b402942b4       dashboard-metrics-scraper-867fb5f87b-xt2l8             kubernetes-dashboard
	ad46cb813a6e5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   4ff304af95bf9       storage-provisioner                                    kube-system
	ab4c87363417a       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   d55810621fb08       kubernetes-dashboard-b84665fb8-v59x7                   kubernetes-dashboard
	257cb8d68427e       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           51 seconds ago      Running             coredns                     1                   02f4bb7a7a27c       coredns-7d764666f9-kzx9l                               kube-system
	db115809dfa3a       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           51 seconds ago      Running             kube-proxy                  1                   79df2d4ea77b3       kube-proxy-qczcb                                       kube-system
	0d1505b051862       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   b33cddb637735       busybox                                                default
	26cd0898742ae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   4ff304af95bf9       storage-provisioner                                    kube-system
	884d1d9a3f6b6       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           52 seconds ago      Running             kindnet-cni                 1                   3e19a4fab7873       kindnet-sf4gn                                          kube-system
	09c17b1ddb55e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           56 seconds ago      Running             kube-controller-manager     1                   317ccb2367adb       kube-controller-manager-default-k8s-diff-port-784377   kube-system
	341d59906e656       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           56 seconds ago      Running             etcd                        1                   0450984a47bd7       etcd-default-k8s-diff-port-784377                      kube-system
	b7d4b7ac920dc       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           56 seconds ago      Running             kube-apiserver              1                   926ec7ab698ba       kube-apiserver-default-k8s-diff-port-784377            kube-system
	ae0f1af189b62       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           56 seconds ago      Running             kube-scheduler              1                   b4d675c703531       kube-scheduler-default-k8s-diff-port-784377            kube-system
	
	
	==> coredns [257cb8d68427e28263c7b7cfc7c556f6a7f666cc3864d1e63a3754d34f6811c0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59843 - 5201 "HINFO IN 2518711960949654255.8563104438600200251. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.047778196s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-784377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-784377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=default-k8s-diff-port-784377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_26_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:26:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-784377
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:28:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:27:54 +0000   Sat, 27 Dec 2025 10:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:27:54 +0000   Sat, 27 Dec 2025 10:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:27:54 +0000   Sat, 27 Dec 2025 10:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:27:54 +0000   Sat, 27 Dec 2025 10:26:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-784377
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                0c39d998-d532-41c6-a784-b1225108f230
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-kzx9l                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-default-k8s-diff-port-784377                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         111s
	  kube-system                 kindnet-sf4gn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-default-k8s-diff-port-784377             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-784377    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-qczcb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-default-k8s-diff-port-784377             100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-xt2l8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-v59x7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node default-k8s-diff-port-784377 event: Registered Node default-k8s-diff-port-784377 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node default-k8s-diff-port-784377 event: Registered Node default-k8s-diff-port-784377 in Controller
	
	
	==> dmesg <==
	[Dec27 09:57] overlayfs: idmapped layers are currently not supported
	[Dec27 09:58] overlayfs: idmapped layers are currently not supported
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [341d59906e656a82c9766c6d3a223e1725be3195ff00ad2041b4404437e3f112] <==
	{"level":"info","ts":"2025-12-27T10:27:20.479246Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:27:20.479323Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:27:20.482902Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-27T10:27:20.497887Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:27:20.497935Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:27:20.486182Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:27:20.497968Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:27:21.270311Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:27:21.270434Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:27:21.270505Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:27:21.270545Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:27:21.270585Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:27:21.280014Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:27:21.280118Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:27:21.280186Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:27:21.280226Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:27:21.284153Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-784377 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:27:21.284239Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:27:21.284590Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:27:21.284637Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:27:21.284292Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:27:21.288858Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:27:21.312752Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:27:21.320640Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:27:21.336476Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 10:28:17 up  2:10,  0 user,  load average: 1.63, 1.72, 1.87
	Linux default-k8s-diff-port-784377 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [884d1d9a3f6b6d4e5ce1a17c115ad0f1b2b6ab7bce7608cb12ecf6a2e5c23c23] <==
	I1227 10:27:25.126847       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:27:25.136236       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:27:25.136540       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:27:25.136584       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:27:25.136633       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:27:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:27:25.325244       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:27:25.325326       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:27:25.325360       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:27:25.325502       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:27:55.325501       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:27:55.325575       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1227 10:27:55.325518       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:27:55.325668       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1227 10:27:56.726049       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:27:56.726084       1 metrics.go:72] Registering metrics
	I1227 10:27:56.726167       1 controller.go:711] "Syncing nftables rules"
	I1227 10:28:05.326059       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:28:05.326331       1 main.go:301] handling current node
	I1227 10:28:15.333322       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:28:15.333371       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b7d4b7ac920dca9cef9cb7f9bfbabba055bba950b9a98343d443ec7b05ee967a] <==
	I1227 10:27:23.982965       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:27:23.985935       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1227 10:27:24.009036       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:27:24.016039       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:24.016065       1 policy_source.go:248] refreshing policies
	I1227 10:27:24.016532       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:27:24.032498       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:27:24.052351       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1227 10:27:24.052623       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:24.052665       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:24.052685       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:27:24.056168       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:27:24.058648       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:27:24.064323       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:27:24.389749       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:27:24.431043       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:27:24.497711       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:27:24.552370       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:27:24.616942       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:27:24.621942       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:27:24.860514       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.185.18"}
	I1227 10:27:24.902090       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.206.24"}
	I1227 10:27:27.306783       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:27:27.405094       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:27:27.579415       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [09c17b1ddb55eb299dcb7712ef525f6d66206db384cf3265ba341b6fbab81ddb] <==
	I1227 10:27:26.884793       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.884867       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.884936       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885115       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885810       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.886615       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:27:26.886753       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-784377"
	I1227 10:27:26.886841       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 10:27:26.885840       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885820       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885827       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885861       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885904       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:27:26.889987       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:27:26.890021       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:27:26.890051       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885925       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885834       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885846       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.885853       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.914572       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.966020       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.979197       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:26.979221       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:27:26.979227       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [db115809dfa3abd66df9cffc3c51771033570d02c2b6cd1553eaa64df166aa8f] <==
	I1227 10:27:25.246694       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:27:25.333536       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:27:25.434629       1 shared_informer.go:377] "Caches are synced"
	I1227 10:27:25.434668       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:27:25.434743       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:27:25.453884       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:27:25.453946       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:27:25.457557       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:27:25.457938       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:27:25.458038       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:27:25.461857       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:27:25.461928       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:27:25.461993       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:27:25.465389       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:27:25.462704       1 config.go:309] "Starting node config controller"
	I1227 10:27:25.465407       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:27:25.465413       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:27:25.463262       1 config.go:200] "Starting service config controller"
	I1227 10:27:25.465420       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:27:25.571292       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:27:25.571348       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1227 10:27:25.573170       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ae0f1af189b62b675aaca897e11c40c3b47839880ed544ff4c379f37a3b95b8d] <==
	I1227 10:27:22.413240       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:27:23.696200       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:27:23.696230       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:27:23.696239       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:27:23.696246       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:27:23.850547       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:27:23.866351       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:27:23.880574       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:27:23.880933       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:27:23.884034       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:27:23.884093       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1227 10:27:23.955769       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:27:23.973161       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	I1227 10:27:25.384157       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:27:35 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:35.923695     781 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-784377" containerName="kube-scheduler"
	Dec 27 10:27:36 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:36.661901     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-v59x7" containerName="kubernetes-dashboard"
	Dec 27 10:27:36 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:36.664985     781 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-784377" containerName="kube-scheduler"
	Dec 27 10:27:37 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:37.666907     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-v59x7" containerName="kubernetes-dashboard"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:43.175215     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:43.175264     781 scope.go:122] "RemoveContainer" containerID="f187d5a7f6fb45554d59e9a757781ec678e0a4d0b173eaee637651087b2261d5"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:43.684270     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:43.684554     781 scope.go:122] "RemoveContainer" containerID="f187d5a7f6fb45554d59e9a757781ec678e0a4d0b173eaee637651087b2261d5"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:43.684671     781 scope.go:122] "RemoveContainer" containerID="6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:43.684843     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xt2l8_kubernetes-dashboard(8d3afba4-0ff1-4578-af46-9b066ddb1e2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" podUID="8d3afba4-0ff1-4578-af46-9b066ddb1e2b"
	Dec 27 10:27:43 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:43.705729     781 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-v59x7" podStartSLOduration=8.399024962 podStartE2EDuration="16.705712111s" podCreationTimestamp="2025-12-27 10:27:27 +0000 UTC" firstStartedPulling="2025-12-27 10:27:27.928282451 +0000 UTC m=+8.596342687" lastFinishedPulling="2025-12-27 10:27:36.2349696 +0000 UTC m=+16.903029836" observedRunningTime="2025-12-27 10:27:36.684894128 +0000 UTC m=+17.352954365" watchObservedRunningTime="2025-12-27 10:27:43.705712111 +0000 UTC m=+24.373772365"
	Dec 27 10:27:53 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:53.175511     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:27:53 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:53.175579     781 scope.go:122] "RemoveContainer" containerID="6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa"
	Dec 27 10:27:53 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:53.175765     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xt2l8_kubernetes-dashboard(8d3afba4-0ff1-4578-af46-9b066ddb1e2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" podUID="8d3afba4-0ff1-4578-af46-9b066ddb1e2b"
	Dec 27 10:27:55 default-k8s-diff-port-784377 kubelet[781]: I1227 10:27:55.715163     781 scope.go:122] "RemoveContainer" containerID="26cd0898742aefbe3c2ea283eb8b4ba807fc319c4c99f42beca15d9b71897019"
	Dec 27 10:27:56 default-k8s-diff-port-784377 kubelet[781]: E1227 10:27:56.617147     781 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-kzx9l" containerName="coredns"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: E1227 10:28:05.557311     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: I1227 10:28:05.557348     781 scope.go:122] "RemoveContainer" containerID="6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: I1227 10:28:05.746095     781 scope.go:122] "RemoveContainer" containerID="6dee58ff89a407026ce888ba4373fbb6c66cc8ff17bedfe166564bd5218d67fa"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: E1227 10:28:05.746384     781 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" containerName="dashboard-metrics-scraper"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: I1227 10:28:05.746403     781 scope.go:122] "RemoveContainer" containerID="c0d39842e5d659a17d409259468eaae4db98e65a38cc3719f0f345fa7cfdc793"
	Dec 27 10:28:05 default-k8s-diff-port-784377 kubelet[781]: E1227 10:28:05.746546     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-xt2l8_kubernetes-dashboard(8d3afba4-0ff1-4578-af46-9b066ddb1e2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-xt2l8" podUID="8d3afba4-0ff1-4578-af46-9b066ddb1e2b"
	Dec 27 10:28:11 default-k8s-diff-port-784377 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:28:11 default-k8s-diff-port-784377 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:28:11 default-k8s-diff-port-784377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ab4c87363417a0b5b98320affadd839b09afd291a3764dcc69becacd5d94a9de] <==
	2025/12/27 10:27:36 Starting overwatch
	2025/12/27 10:27:36 Using namespace: kubernetes-dashboard
	2025/12/27 10:27:36 Using in-cluster config to connect to apiserver
	2025/12/27 10:27:36 Using secret token for csrf signing
	2025/12/27 10:27:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:27:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:27:36 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:27:36 Generating JWE encryption key
	2025/12/27 10:27:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:27:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:27:36 Initializing JWE encryption key from synchronized object
	2025/12/27 10:27:36 Creating in-cluster Sidecar client
	2025/12/27 10:27:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:27:36 Serving insecurely on HTTP port: 9090
	2025/12/27 10:28:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [26cd0898742aefbe3c2ea283eb8b4ba807fc319c4c99f42beca15d9b71897019] <==
	I1227 10:27:25.238015       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:27:55.244313       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ad46cb813a6e5f720bcfeede245df35f45ad054aad108d8e71b257b2fda7fff6] <==
	I1227 10:27:55.760588       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:27:55.773160       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:27:55.773216       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:27:55.780831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:27:59.235726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:03.496709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:07.095046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:10.149287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:13.171828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:13.177116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:28:13.177273       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:28:13.177444       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-784377_092c11e0-c4c7-4427-80c6-acf4931f6180!
	I1227 10:28:13.178922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d0a5a360-e8dc-4c99-8635-c67876792b94", APIVersion:"v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-784377_092c11e0-c4c7-4427-80c6-acf4931f6180 became leader
	W1227 10:28:13.181685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:13.187726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:28:13.278265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-784377_092c11e0-c4c7-4427-80c6-acf4931f6180!
	W1227 10:28:15.190796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:15.195698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:17.199526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:28:17.205532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377: exit status 2 (379.83326ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-784377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.80s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.7s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (287.324649ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:29:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
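The exit status 11 above is minikube's paused-state probe failing rather than the addon enable itself: before enabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node, and on this crio-based node /run/runc does not exist. A minimal way to reproduce the probe by hand, assuming the embed-certs-367691 profile is still up (the crictl command is only an illustrative alternative for inspecting crio containers, not something the test runs):

	out/minikube-linux-arm64 -p embed-certs-367691 ssh -- "sudo runc list -f json"   # fails with: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p embed-certs-367691 ssh -- "sudo crictl ps"           # lists the running containers via crio instead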
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-367691 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-367691 describe deploy/metrics-server -n kube-system: exit status 1 (140.997046ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-367691 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
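The image assertion above checks the deployment description for the rewritten reference fake.domain/registry.k8s.io/echoserver:1.4; it is empty here because the enable step never created the deployment. Once a metrics-server deployment does exist, roughly the same check can be made directly (a sketch only; the jsonpath expression is an assumed way to surface the container image, not part of the test):

	kubectl --context embed-certs-367691 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4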
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-367691
helpers_test.go:244: (dbg) docker inspect embed-certs-367691:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857",
	        "Created": "2025-12-27T10:28:25.951096938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 502308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:28:26.031644402Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/hostname",
	        "HostsPath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/hosts",
	        "LogPath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857-json.log",
	        "Name": "/embed-certs-367691",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-367691:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-367691",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857",
	                "LowerDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-367691",
	                "Source": "/var/lib/docker/volumes/embed-certs-367691/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-367691",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-367691",
	                "name.minikube.sigs.k8s.io": "embed-certs-367691",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "124d67e212122c90aec8dbce0c0715cb3ab39188914c2a7a64adbb545eb5d435",
	            "SandboxKey": "/var/run/docker/netns/124d67e21212",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-367691": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:b2:15:d2:a3:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d03ce9bfd46e85bbc9765f774251ba284121a67953c86059ad99286cf88212c",
	                    "EndpointID": "e59da0468fa92e48d931813e7c50969216c9618019860b20806feef40247fc6c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-367691",
	                        "d75458839d4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
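The Ports section of the inspect output shows each published node port bound to an ephemeral host port on 127.0.0.1. One mapping can be read back out with the same Go-template style minikube uses for 22/tcp later in these logs; here, as an illustration, querying the API server port (8443/tcp) of the container inspected above:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-367691
	# -> 33431 for this run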
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-367691 -n embed-certs-367691
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-367691 -n embed-certs-367691: (1.054041015s)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-367691 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-367691 logs -n 25: (1.520507146s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-810217                                                                                                                                                                                                                        │ cert-options-810217          │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:23 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:23 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-482317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │                     │
	│ stop    │ -p old-k8s-version-482317 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-482317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:25 UTC │
	│ image   │ old-k8s-version-482317 image list --format=json                                                                                                                                                                                               │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │ 27 Dec 25 10:25 UTC │
	│ pause   │ -p old-k8s-version-482317 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │                     │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-784377 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-784377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ image   │ default-k8s-diff-port-784377 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ pause   │ -p default-k8s-diff-port-784377 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ ssh     │ force-systemd-flag-915850 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p force-systemd-flag-915850                                                                                                                                                                                                                  │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p disable-driver-mounts-913868                                                                                                                                                                                                               │ disable-driver-mounts-913868 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:28:58
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:28:58.429601  505333 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:28:58.430026  505333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:28:58.430317  505333 out.go:374] Setting ErrFile to fd 2...
	I1227 10:28:58.430352  505333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:28:58.430636  505333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:28:58.431101  505333 out.go:368] Setting JSON to false
	I1227 10:28:58.432156  505333 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7892,"bootTime":1766823447,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:28:58.432255  505333 start.go:143] virtualization:  
	I1227 10:28:58.436325  505333 out.go:179] * [no-preload-241090] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:28:58.440724  505333 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:28:58.440823  505333 notify.go:221] Checking for updates...
	I1227 10:28:58.447177  505333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:28:58.450292  505333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:28:58.453406  505333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:28:58.456560  505333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:28:58.459580  505333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:28:58.463116  505333 config.go:182] Loaded profile config "embed-certs-367691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:28:58.463227  505333 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:28:58.499098  505333 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:28:58.499245  505333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:28:58.557557  505333 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:28:58.547430525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:28:58.557659  505333 docker.go:319] overlay module found
	I1227 10:28:58.560835  505333 out.go:179] * Using the docker driver based on user configuration
	I1227 10:28:58.563836  505333 start.go:309] selected driver: docker
	I1227 10:28:58.563855  505333 start.go:928] validating driver "docker" against <nil>
	I1227 10:28:58.563869  505333 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:28:58.564612  505333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:28:58.625257  505333 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:28:58.616072459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:28:58.625407  505333 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:28:58.625640  505333 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:28:58.628628  505333 out.go:179] * Using Docker driver with root privileges
	I1227 10:28:58.631544  505333 cni.go:84] Creating CNI manager for ""
	I1227 10:28:58.631624  505333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:28:58.631634  505333 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:28:58.631722  505333 start.go:353] cluster config:
	{Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:28:58.634913  505333 out.go:179] * Starting "no-preload-241090" primary control-plane node in "no-preload-241090" cluster
	I1227 10:28:58.637824  505333 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:28:58.640878  505333 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:28:58.643726  505333 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:28:58.643806  505333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:28:58.643885  505333 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/config.json ...
	I1227 10:28:58.643916  505333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/config.json: {Name:mk346e50667e2944c64a370e5d5938f22f4423b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:28:58.644190  505333 cache.go:107] acquiring lock: {Name:mk20c624f37c3909dde5a8d589ecabaa6d57d038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:58.644259  505333 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 10:28:58.644275  505333 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.005µs
	I1227 10:28:58.644293  505333 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 10:28:58.644306  505333 cache.go:107] acquiring lock: {Name:mkbb24fa4343d0a35603cb19aa6239dff4f2f276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:58.644350  505333 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 10:28:58.644360  505333 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 55.91µs
	I1227 10:28:58.644366  505333 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 10:28:58.644377  505333 cache.go:107] acquiring lock: {Name:mk4c45856071606c8af5d7273166a2f1bb9ddc55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:58.644410  505333 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 10:28:58.644419  505333 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 43.717µs
	I1227 10:28:58.644426  505333 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 10:28:58.644435  505333 cache.go:107] acquiring lock: {Name:mkf9b1edb58a976305f282f57eeb11e80f0b7bb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:58.644466  505333 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 10:28:58.644475  505333 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 41.002µs
	I1227 10:28:58.644481  505333 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 10:28:58.644491  505333 cache.go:107] acquiring lock: {Name:mkf98c62b88cf915fe929ba90cd6ed029cecc870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:58.644522  505333 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 10:28:58.644541  505333 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 46.679µs
	I1227 10:28:58.644551  505333 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 10:28:58.644561  505333 cache.go:107] acquiring lock: {Name:mka12fccf8e2bbc0ccc499614d0ccb8a211e1cb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:58.644592  505333 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1227 10:28:58.644601  505333 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.338µs
	I1227 10:28:58.644607  505333 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 10:28:58.644617  505333 cache.go:107] acquiring lock: {Name:mk2a8f120e089d53474aed758c34eb39d391985d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:58.644753  505333 cache.go:107] acquiring lock: {Name:mk262c37486fa86829e275f8385c93b0718c0ef2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:58.644832  505333 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 10:28:58.644841  505333 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 98.085µs
	I1227 10:28:58.644848  505333 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 10:28:58.644863  505333 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 10:28:58.644869  505333 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 252.992µs
	I1227 10:28:58.644875  505333 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 10:28:58.644882  505333 cache.go:87] Successfully saved all images to host disk.
	I1227 10:28:58.665114  505333 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:28:58.665139  505333 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:28:58.665160  505333 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:28:58.665193  505333 start.go:360] acquireMachinesLock for no-preload-241090: {Name:mk51902d6c01d44d9c13da3d668b0d82e1b30c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:28:58.666635  505333 start.go:364] duration metric: took 1.414931ms to acquireMachinesLock for "no-preload-241090"
	I1227 10:28:58.666684  505333 start.go:93] Provisioning new machine with config: &{Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:28:58.666773  505333 start.go:125] createHost starting for "" (driver="docker")
	W1227 10:28:57.440450  501861 node_ready.go:57] node "embed-certs-367691" has "Ready":"False" status (will retry)
	W1227 10:28:59.940792  501861 node_ready.go:57] node "embed-certs-367691" has "Ready":"False" status (will retry)
	I1227 10:28:58.672199  505333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:28:58.672463  505333 start.go:159] libmachine.API.Create for "no-preload-241090" (driver="docker")
	I1227 10:28:58.672504  505333 client.go:173] LocalClient.Create starting
	I1227 10:28:58.672585  505333 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:28:58.672623  505333 main.go:144] libmachine: Decoding PEM data...
	I1227 10:28:58.672643  505333 main.go:144] libmachine: Parsing certificate...
	I1227 10:28:58.672696  505333 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:28:58.672721  505333 main.go:144] libmachine: Decoding PEM data...
	I1227 10:28:58.672737  505333 main.go:144] libmachine: Parsing certificate...
	I1227 10:28:58.673124  505333 cli_runner.go:164] Run: docker network inspect no-preload-241090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:28:58.690950  505333 cli_runner.go:211] docker network inspect no-preload-241090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:28:58.691062  505333 network_create.go:284] running [docker network inspect no-preload-241090] to gather additional debugging logs...
	I1227 10:28:58.691098  505333 cli_runner.go:164] Run: docker network inspect no-preload-241090
	W1227 10:28:58.708139  505333 cli_runner.go:211] docker network inspect no-preload-241090 returned with exit code 1
	I1227 10:28:58.708171  505333 network_create.go:287] error running [docker network inspect no-preload-241090]: docker network inspect no-preload-241090: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-241090 not found
	I1227 10:28:58.708184  505333 network_create.go:289] output of [docker network inspect no-preload-241090]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-241090 not found
	
	** /stderr **
	I1227 10:28:58.708291  505333 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:28:58.725524  505333 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:28:58.725976  505333 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:28:58.726280  505333 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:28:58.726635  505333 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8d03ce9bfd46 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:ee:8e:0e:3d:32} reservation:<nil>}
	I1227 10:28:58.727158  505333 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a46d40}
	I1227 10:28:58.727197  505333 network_create.go:124] attempt to create docker network no-preload-241090 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:28:58.727268  505333 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-241090 no-preload-241090
	I1227 10:28:58.796009  505333 network_create.go:108] docker network no-preload-241090 192.168.85.0/24 created
	I1227 10:28:58.796041  505333 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-241090" container
	I1227 10:28:58.796136  505333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:28:58.812407  505333 cli_runner.go:164] Run: docker volume create no-preload-241090 --label name.minikube.sigs.k8s.io=no-preload-241090 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:28:58.831865  505333 oci.go:103] Successfully created a docker volume no-preload-241090
	I1227 10:28:58.832007  505333 cli_runner.go:164] Run: docker run --rm --name no-preload-241090-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241090 --entrypoint /usr/bin/test -v no-preload-241090:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:28:59.364060  505333 oci.go:107] Successfully prepared a docker volume no-preload-241090
	I1227 10:28:59.364129  505333 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W1227 10:28:59.364295  505333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:28:59.364395  505333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:28:59.418116  505333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-241090 --name no-preload-241090 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-241090 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-241090 --network no-preload-241090 --ip 192.168.85.2 --volume no-preload-241090:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:28:59.744024  505333 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Running}}
	I1227 10:28:59.770629  505333 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:28:59.794030  505333 cli_runner.go:164] Run: docker exec no-preload-241090 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:28:59.853531  505333 oci.go:144] the created container "no-preload-241090" has a running status.
	I1227 10:28:59.853571  505333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa...
	I1227 10:28:59.965187  505333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:28:59.986980  505333 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:29:00.056201  505333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:29:00.056224  505333 kic_runner.go:114] Args: [docker exec --privileged no-preload-241090 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:29:00.244330  505333 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:29:00.309610  505333 machine.go:94] provisionDockerMachine start ...
	I1227 10:29:00.309724  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:00.341510  505333 main.go:144] libmachine: Using SSH client type: native
	I1227 10:29:00.341898  505333 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 10:29:00.341917  505333 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:29:00.342782  505333 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38236->127.0.0.1:33433: read: connection reset by peer
	W1227 10:29:01.941463  501861 node_ready.go:57] node "embed-certs-367691" has "Ready":"False" status (will retry)
	W1227 10:29:04.440710  501861 node_ready.go:57] node "embed-certs-367691" has "Ready":"False" status (will retry)
	I1227 10:29:03.491572  505333 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-241090
	
	I1227 10:29:03.491599  505333 ubuntu.go:182] provisioning hostname "no-preload-241090"
	I1227 10:29:03.491673  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:03.515206  505333 main.go:144] libmachine: Using SSH client type: native
	I1227 10:29:03.515535  505333 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 10:29:03.515553  505333 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-241090 && echo "no-preload-241090" | sudo tee /etc/hostname
	I1227 10:29:03.675820  505333 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-241090
	
	I1227 10:29:03.675937  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:03.697156  505333 main.go:144] libmachine: Using SSH client type: native
	I1227 10:29:03.697482  505333 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 10:29:03.697499  505333 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241090' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241090/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241090' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:29:03.836412  505333 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:29:03.836458  505333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:29:03.836478  505333 ubuntu.go:190] setting up certificates
	I1227 10:29:03.836487  505333 provision.go:84] configureAuth start
	I1227 10:29:03.836551  505333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241090
	I1227 10:29:03.854812  505333 provision.go:143] copyHostCerts
	I1227 10:29:03.854899  505333 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:29:03.854913  505333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:29:03.854989  505333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:29:03.855117  505333 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:29:03.855129  505333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:29:03.855157  505333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:29:03.855225  505333 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:29:03.855236  505333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:29:03.855263  505333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:29:03.855323  505333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.no-preload-241090 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-241090]
	I1227 10:29:04.034404  505333 provision.go:177] copyRemoteCerts
	I1227 10:29:04.034517  505333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:29:04.034588  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:04.053242  505333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:29:04.156416  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:29:04.176116  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:29:04.196323  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:29:04.214601  505333 provision.go:87] duration metric: took 378.100204ms to configureAuth
	I1227 10:29:04.214629  505333 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:29:04.214826  505333 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:29:04.214960  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:04.232674  505333 main.go:144] libmachine: Using SSH client type: native
	I1227 10:29:04.232989  505333 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1227 10:29:04.233007  505333 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:29:04.533428  505333 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:29:04.533454  505333 machine.go:97] duration metric: took 4.223819102s to provisionDockerMachine
	I1227 10:29:04.533466  505333 client.go:176] duration metric: took 5.86095186s to LocalClient.Create
	I1227 10:29:04.533516  505333 start.go:167] duration metric: took 5.861019282s to libmachine.API.Create "no-preload-241090"
	I1227 10:29:04.533533  505333 start.go:293] postStartSetup for "no-preload-241090" (driver="docker")
	I1227 10:29:04.533543  505333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:29:04.533657  505333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:29:04.533726  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:04.551779  505333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:29:04.660383  505333 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:29:04.664010  505333 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:29:04.664041  505333 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:29:04.664054  505333 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:29:04.664114  505333 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:29:04.664196  505333 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:29:04.664304  505333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:29:04.671978  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:29:04.690233  505333 start.go:296] duration metric: took 156.684293ms for postStartSetup
	I1227 10:29:04.690678  505333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241090
	I1227 10:29:04.710260  505333 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/config.json ...
	I1227 10:29:04.710553  505333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:29:04.710595  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:04.730203  505333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:29:04.824940  505333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:29:04.829606  505333 start.go:128] duration metric: took 6.16281898s to createHost
	I1227 10:29:04.829635  505333 start.go:83] releasing machines lock for "no-preload-241090", held for 6.162975019s
	I1227 10:29:04.829723  505333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241090
	I1227 10:29:04.847383  505333 ssh_runner.go:195] Run: cat /version.json
	I1227 10:29:04.847447  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:04.847748  505333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:29:04.847870  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:04.866727  505333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:29:04.878558  505333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:29:05.066529  505333 ssh_runner.go:195] Run: systemctl --version
	I1227 10:29:05.073210  505333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:29:05.110034  505333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:29:05.114492  505333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:29:05.114581  505333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:29:05.144597  505333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:29:05.144626  505333 start.go:496] detecting cgroup driver to use...
	I1227 10:29:05.144689  505333 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:29:05.144759  505333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:29:05.164759  505333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:29:05.178393  505333 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:29:05.178477  505333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:29:05.196329  505333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:29:05.214962  505333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:29:05.346101  505333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:29:05.476054  505333 docker.go:234] disabling docker service ...
	I1227 10:29:05.476171  505333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:29:05.498132  505333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:29:05.512559  505333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:29:05.654616  505333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:29:05.781917  505333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:29:05.795025  505333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:29:05.810340  505333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:29:05.810413  505333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:05.820739  505333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:29:05.820827  505333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:05.830540  505333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:05.840771  505333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:05.849887  505333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:29:05.858619  505333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:05.867657  505333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:05.881931  505333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:05.890916  505333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:29:05.898574  505333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:29:05.906311  505333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:29:06.077842  505333 ssh_runner.go:195] Run: sudo systemctl restart crio
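The sequence of sed edits above pins the pause image, switches CRI-O to the cgroupfs cgroup manager, sets conmon_cgroup to "pod", and opens unprivileged low ports before the daemon is restarted. A quick way to confirm the resulting drop-in on the node would be something like the following sketch (the grep pattern and the expected values are inferred from the commands above, not copied from the file):

    # show the settings the sed commands above should have produced
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",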
	I1227 10:29:06.251322  505333 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:29:06.251433  505333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:29:06.255422  505333 start.go:574] Will wait 60s for crictl version
	I1227 10:29:06.255490  505333 ssh_runner.go:195] Run: which crictl
	I1227 10:29:06.259186  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:29:06.285715  505333 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:29:06.285854  505333 ssh_runner.go:195] Run: crio --version
	I1227 10:29:06.344385  505333 ssh_runner.go:195] Run: crio --version
	I1227 10:29:06.396877  505333 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:29:06.401563  505333 cli_runner.go:164] Run: docker network inspect no-preload-241090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:29:06.428068  505333 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:29:06.433074  505333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:29:06.449560  505333 kubeadm.go:884] updating cluster {Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:29:06.449684  505333 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:29:06.449733  505333 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:29:06.498791  505333 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I1227 10:29:06.498819  505333 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1227 10:29:06.498892  505333 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:29:06.499127  505333 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:29:06.499233  505333 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:29:06.499322  505333 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:29:06.499410  505333 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:29:06.499513  505333 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1227 10:29:06.499601  505333 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1227 10:29:06.499700  505333 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:29:06.500688  505333 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1227 10:29:06.501450  505333 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1227 10:29:06.501663  505333 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:29:06.501846  505333 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:29:06.502025  505333 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:29:06.502214  505333 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:29:06.502586  505333 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:29:06.504445  505333 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:29:06.837360  505333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:29:06.845947  505333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1227 10:29:06.852028  505333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:29:06.862862  505333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:29:06.872123  505333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:29:06.874846  505333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:29:06.877151  505333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1227 10:29:06.903074  505333 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5" in container runtime
	I1227 10:29:06.903173  505333 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:29:06.903245  505333 ssh_runner.go:195] Run: which crictl
	I1227 10:29:06.929600  505333 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1227 10:29:06.929640  505333 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I1227 10:29:06.929750  505333 ssh_runner.go:195] Run: which crictl
	I1227 10:29:06.983080  505333 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856" in container runtime
	I1227 10:29:06.983181  505333 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:29:06.983263  505333 ssh_runner.go:195] Run: which crictl
	I1227 10:29:07.003627  505333 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0" in container runtime
	I1227 10:29:07.003771  505333 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:29:07.003865  505333 ssh_runner.go:195] Run: which crictl
	I1227 10:29:07.021067  505333 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1227 10:29:07.021406  505333 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:29:07.021465  505333 ssh_runner.go:195] Run: which crictl
	I1227 10:29:07.021156  505333 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f" in container runtime
	I1227 10:29:07.021603  505333 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:29:07.021211  505333 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1227 10:29:07.021696  505333 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1227 10:29:07.021734  505333 ssh_runner.go:195] Run: which crictl
	I1227 10:29:07.021295  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:29:07.021331  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:29:07.021337  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 10:29:07.021376  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:29:07.021899  505333 ssh_runner.go:195] Run: which crictl
	I1227 10:29:07.029056  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:29:07.035458  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 10:29:07.105117  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:29:07.105204  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 10:29:07.105267  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:29:07.105330  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:29:07.121347  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:29:07.121511  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:29:07.133469  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 10:29:07.210251  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:29:07.210362  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I1227 10:29:07.210446  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1227 10:29:07.210521  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I1227 10:29:07.254313  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I1227 10:29:07.254360  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1227 10:29:07.258944  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1227 10:29:07.322918  505333 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0
	I1227 10:29:07.323114  505333 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0
	I1227 10:29:07.323181  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 10:29:07.323223  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 10:29:07.323071  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I1227 10:29:07.323285  505333 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1227 10:29:07.323407  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1227 10:29:07.373057  505333 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0
	I1227 10:29:07.373336  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 10:29:07.373159  505333 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1227 10:29:07.373485  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1227 10:29:07.373218  505333 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1227 10:29:07.373603  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1227 10:29:07.400132  505333 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1227 10:29:07.400179  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1227 10:29:07.400321  505333 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I1227 10:29:07.400366  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (24702976 bytes)
	I1227 10:29:07.400494  505333 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0
	I1227 10:29:07.400603  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 10:29:07.400764  505333 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I1227 10:29:07.400807  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (20682752 bytes)
	I1227 10:29:07.401258  505333 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1227 10:29:07.401328  505333 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I1227 10:29:07.401434  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (22434816 bytes)
	I1227 10:29:07.401354  505333 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1227 10:29:07.401517  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1227 10:29:07.401398  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1227 10:29:07.464355  505333 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I1227 10:29:07.464401  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (15415808 bytes)
	I1227 10:29:07.573107  505333 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1227 10:29:07.573185  505333 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1227 10:29:07.707312  505333 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1227 10:29:07.707487  505333 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:29:08.084260  505333 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1227 10:29:08.084315  505333 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:29:08.084382  505333 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1227 10:29:08.084569  505333 ssh_runner.go:195] Run: which crictl
	I1227 10:29:08.125189  505333 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 10:29:08.125274  505333 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I1227 10:29:08.172372  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:29:06.440606  501861 node_ready.go:49] node "embed-certs-367691" is "Ready"
	I1227 10:29:06.440632  501861 node_ready.go:38] duration metric: took 13.503876013s for node "embed-certs-367691" to be "Ready" ...
	I1227 10:29:06.440645  501861 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:29:06.440703  501861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:29:06.467523  501861 api_server.go:72] duration metric: took 14.532776571s to wait for apiserver process to appear ...
	I1227 10:29:06.467547  501861 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:29:06.467567  501861 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:29:06.480395  501861 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
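The health gate above is simply an HTTPS GET against the apiserver's /healthz endpoint that must return 200 with the body "ok". The same check can be reproduced from the host with curl (a sketch; -k skips verification of the cluster's self-signed certificate):

    curl -sk https://192.168.76.2:8443/healthz
    # expected output: ok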
	I1227 10:29:06.481994  501861 api_server.go:141] control plane version: v1.35.0
	I1227 10:29:06.482019  501861 api_server.go:131] duration metric: took 14.464696ms to wait for apiserver health ...
	I1227 10:29:06.482028  501861 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:29:06.486763  501861 system_pods.go:59] 8 kube-system pods found
	I1227 10:29:06.486798  501861 system_pods.go:61] "coredns-7d764666f9-t88nq" [6209d048-3dca-4ad1-849b-159b1b571154] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:06.486805  501861 system_pods.go:61] "etcd-embed-certs-367691" [05f20b28-c7d2-4dbb-a7b0-967ce049635e] Running
	I1227 10:29:06.486811  501861 system_pods.go:61] "kindnet-8pr87" [77655848-4bef-4fdb-af7c-7f4bf3d0309b] Running
	I1227 10:29:06.486816  501861 system_pods.go:61] "kube-apiserver-embed-certs-367691" [c535e1b3-fee8-4461-8b61-233aaa8495d5] Running
	I1227 10:29:06.486823  501861 system_pods.go:61] "kube-controller-manager-embed-certs-367691" [e1b8075e-1716-4ea6-88f9-462b0aff4cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:29:06.486827  501861 system_pods.go:61] "kube-proxy-rpjg8" [a721f14e-75c6-4caf-91f8-0e5d13c01982] Running
	I1227 10:29:06.486832  501861 system_pods.go:61] "kube-scheduler-embed-certs-367691" [40e24817-0b4b-4d8f-b43b-7fbd4a5f42fc] Running
	I1227 10:29:06.486844  501861 system_pods.go:61] "storage-provisioner" [554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:29:06.486850  501861 system_pods.go:74] duration metric: took 4.816155ms to wait for pod list to return data ...
	I1227 10:29:06.486858  501861 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:29:06.490053  501861 default_sa.go:45] found service account: "default"
	I1227 10:29:06.490132  501861 default_sa.go:55] duration metric: took 3.268119ms for default service account to be created ...
	I1227 10:29:06.490159  501861 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:29:06.506143  501861 system_pods.go:86] 8 kube-system pods found
	I1227 10:29:06.506225  501861 system_pods.go:89] "coredns-7d764666f9-t88nq" [6209d048-3dca-4ad1-849b-159b1b571154] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:06.506258  501861 system_pods.go:89] "etcd-embed-certs-367691" [05f20b28-c7d2-4dbb-a7b0-967ce049635e] Running
	I1227 10:29:06.506284  501861 system_pods.go:89] "kindnet-8pr87" [77655848-4bef-4fdb-af7c-7f4bf3d0309b] Running
	I1227 10:29:06.506315  501861 system_pods.go:89] "kube-apiserver-embed-certs-367691" [c535e1b3-fee8-4461-8b61-233aaa8495d5] Running
	I1227 10:29:06.506338  501861 system_pods.go:89] "kube-controller-manager-embed-certs-367691" [e1b8075e-1716-4ea6-88f9-462b0aff4cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:29:06.506365  501861 system_pods.go:89] "kube-proxy-rpjg8" [a721f14e-75c6-4caf-91f8-0e5d13c01982] Running
	I1227 10:29:06.506401  501861 system_pods.go:89] "kube-scheduler-embed-certs-367691" [40e24817-0b4b-4d8f-b43b-7fbd4a5f42fc] Running
	I1227 10:29:06.506425  501861 system_pods.go:89] "storage-provisioner" [554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:29:06.506491  501861 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 10:29:06.725058  501861 system_pods.go:86] 8 kube-system pods found
	I1227 10:29:06.725105  501861 system_pods.go:89] "coredns-7d764666f9-t88nq" [6209d048-3dca-4ad1-849b-159b1b571154] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:06.725113  501861 system_pods.go:89] "etcd-embed-certs-367691" [05f20b28-c7d2-4dbb-a7b0-967ce049635e] Running
	I1227 10:29:06.725120  501861 system_pods.go:89] "kindnet-8pr87" [77655848-4bef-4fdb-af7c-7f4bf3d0309b] Running
	I1227 10:29:06.725126  501861 system_pods.go:89] "kube-apiserver-embed-certs-367691" [c535e1b3-fee8-4461-8b61-233aaa8495d5] Running
	I1227 10:29:06.725135  501861 system_pods.go:89] "kube-controller-manager-embed-certs-367691" [e1b8075e-1716-4ea6-88f9-462b0aff4cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:29:06.725140  501861 system_pods.go:89] "kube-proxy-rpjg8" [a721f14e-75c6-4caf-91f8-0e5d13c01982] Running
	I1227 10:29:06.725145  501861 system_pods.go:89] "kube-scheduler-embed-certs-367691" [40e24817-0b4b-4d8f-b43b-7fbd4a5f42fc] Running
	I1227 10:29:06.725159  501861 system_pods.go:89] "storage-provisioner" [554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:29:06.974256  501861 system_pods.go:86] 8 kube-system pods found
	I1227 10:29:06.974292  501861 system_pods.go:89] "coredns-7d764666f9-t88nq" [6209d048-3dca-4ad1-849b-159b1b571154] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:06.974300  501861 system_pods.go:89] "etcd-embed-certs-367691" [05f20b28-c7d2-4dbb-a7b0-967ce049635e] Running
	I1227 10:29:06.974307  501861 system_pods.go:89] "kindnet-8pr87" [77655848-4bef-4fdb-af7c-7f4bf3d0309b] Running
	I1227 10:29:06.974317  501861 system_pods.go:89] "kube-apiserver-embed-certs-367691" [c535e1b3-fee8-4461-8b61-233aaa8495d5] Running
	I1227 10:29:06.974325  501861 system_pods.go:89] "kube-controller-manager-embed-certs-367691" [e1b8075e-1716-4ea6-88f9-462b0aff4cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:29:06.974337  501861 system_pods.go:89] "kube-proxy-rpjg8" [a721f14e-75c6-4caf-91f8-0e5d13c01982] Running
	I1227 10:29:06.974342  501861 system_pods.go:89] "kube-scheduler-embed-certs-367691" [40e24817-0b4b-4d8f-b43b-7fbd4a5f42fc] Running
	I1227 10:29:06.974348  501861 system_pods.go:89] "storage-provisioner" [554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:29:07.425242  501861 system_pods.go:86] 8 kube-system pods found
	I1227 10:29:07.425276  501861 system_pods.go:89] "coredns-7d764666f9-t88nq" [6209d048-3dca-4ad1-849b-159b1b571154] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:07.425284  501861 system_pods.go:89] "etcd-embed-certs-367691" [05f20b28-c7d2-4dbb-a7b0-967ce049635e] Running
	I1227 10:29:07.425289  501861 system_pods.go:89] "kindnet-8pr87" [77655848-4bef-4fdb-af7c-7f4bf3d0309b] Running
	I1227 10:29:07.425294  501861 system_pods.go:89] "kube-apiserver-embed-certs-367691" [c535e1b3-fee8-4461-8b61-233aaa8495d5] Running
	I1227 10:29:07.425303  501861 system_pods.go:89] "kube-controller-manager-embed-certs-367691" [e1b8075e-1716-4ea6-88f9-462b0aff4cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:29:07.425308  501861 system_pods.go:89] "kube-proxy-rpjg8" [a721f14e-75c6-4caf-91f8-0e5d13c01982] Running
	I1227 10:29:07.425313  501861 system_pods.go:89] "kube-scheduler-embed-certs-367691" [40e24817-0b4b-4d8f-b43b-7fbd4a5f42fc] Running
	I1227 10:29:07.425317  501861 system_pods.go:89] "storage-provisioner" [554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf] Running
	I1227 10:29:07.425324  501861 system_pods.go:126] duration metric: took 935.140139ms to wait for k8s-apps to be running ...
	I1227 10:29:07.425332  501861 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:29:07.425383  501861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:29:07.475499  501861 system_svc.go:56] duration metric: took 50.158309ms WaitForService to wait for kubelet
	I1227 10:29:07.475528  501861 kubeadm.go:587] duration metric: took 15.540786863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:29:07.475546  501861 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:29:07.494848  501861 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:29:07.494932  501861 node_conditions.go:123] node cpu capacity is 2
	I1227 10:29:07.494960  501861 node_conditions.go:105] duration metric: took 19.408271ms to run NodePressure ...
	I1227 10:29:07.494988  501861 start.go:242] waiting for startup goroutines ...
	I1227 10:29:07.495019  501861 start.go:247] waiting for cluster config update ...
	I1227 10:29:07.495051  501861 start.go:256] writing updated cluster config ...
	I1227 10:29:07.495382  501861 ssh_runner.go:195] Run: rm -f paused
	I1227 10:29:07.503940  501861 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:29:07.524325  501861 pod_ready.go:83] waiting for pod "coredns-7d764666f9-t88nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:07.566663  501861 pod_ready.go:94] pod "coredns-7d764666f9-t88nq" is "Ready"
	I1227 10:29:07.566739  501861 pod_ready.go:86] duration metric: took 42.386037ms for pod "coredns-7d764666f9-t88nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:07.623645  501861 pod_ready.go:83] waiting for pod "etcd-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:07.629886  501861 pod_ready.go:94] pod "etcd-embed-certs-367691" is "Ready"
	I1227 10:29:07.629963  501861 pod_ready.go:86] duration metric: took 6.248917ms for pod "etcd-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:07.632875  501861 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:07.645778  501861 pod_ready.go:94] pod "kube-apiserver-embed-certs-367691" is "Ready"
	I1227 10:29:07.645853  501861 pod_ready.go:86] duration metric: took 12.908997ms for pod "kube-apiserver-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:07.649599  501861 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:07.912947  501861 pod_ready.go:94] pod "kube-controller-manager-embed-certs-367691" is "Ready"
	I1227 10:29:07.912971  501861 pod_ready.go:86] duration metric: took 263.304368ms for pod "kube-controller-manager-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:08.110452  501861 pod_ready.go:83] waiting for pod "kube-proxy-rpjg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:08.511263  501861 pod_ready.go:94] pod "kube-proxy-rpjg8" is "Ready"
	I1227 10:29:08.511304  501861 pod_ready.go:86] duration metric: took 400.825734ms for pod "kube-proxy-rpjg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:08.710134  501861 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:09.110259  501861 pod_ready.go:94] pod "kube-scheduler-embed-certs-367691" is "Ready"
	I1227 10:29:09.110374  501861 pod_ready.go:86] duration metric: took 400.216203ms for pod "kube-scheduler-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:09.110479  501861 pod_ready.go:40] duration metric: took 1.606492149s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:29:09.188030  501861 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:29:09.191952  501861 out.go:203] 
	W1227 10:29:09.195093  501861 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:29:09.199255  501861 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:29:09.202266  501861 out.go:179] * Done! kubectl is now configured to use "embed-certs-367691" cluster and "default" namespace by default
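The kubectl skew warning is informational: the host binary is v1.33.2 while the cluster runs v1.35.0. As the log itself suggests, the version-matched kubectl bundled by minikube can be used instead, for example:

    # runs a kubectl matching the cluster version against the embed-certs-367691 profile
    minikube -p embed-certs-367691 kubectl -- get pods -A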
	I1227 10:29:09.905526  505333 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.780218741s)
	I1227 10:29:09.905553  505333 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I1227 10:29:09.905573  505333 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 10:29:09.905579  505333 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.733175089s)
	I1227 10:29:09.905620  505333 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I1227 10:29:09.905636  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:29:11.144663  505333 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.239004501s)
	I1227 10:29:11.144753  505333 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:29:11.144820  505333 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.239179247s)
	I1227 10:29:11.144848  505333 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I1227 10:29:11.144876  505333 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1227 10:29:11.144933  505333 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1227 10:29:13.049244  505333 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.904290193s)
	I1227 10:29:13.049274  505333 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1227 10:29:13.049294  505333 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 10:29:13.049341  505333 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I1227 10:29:13.049425  505333 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.904661324s)
	I1227 10:29:13.049462  505333 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1227 10:29:13.049533  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1227 10:29:14.443928  505333 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.394559784s)
	I1227 10:29:14.443959  505333 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I1227 10:29:14.444029  505333 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1227 10:29:14.444085  505333 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1227 10:29:14.444181  505333 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.394633778s)
	I1227 10:29:14.444202  505333 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1227 10:29:14.444224  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1227 10:29:15.815037  505333 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.370925305s)
	I1227 10:29:15.815071  505333 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1227 10:29:15.815096  505333 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 10:29:15.815146  505333 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I1227 10:29:17.165353  505333 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.350180965s)
	I1227 10:29:17.165384  505333 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I1227 10:29:17.165402  505333 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1227 10:29:17.165467  505333 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1227 10:29:17.885371  505333 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1227 10:29:17.885405  505333 cache_images.go:125] Successfully loaded all cached images
	I1227 10:29:17.885410  505333 cache_images.go:94] duration metric: took 11.386579922s to LoadCachedImages
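Each cached image follows the same pattern above: stat the tarball on the node, scp it over if missing, load it with podman into the shared container storage, and let CRI-O pick it up. Done by hand for a single image it would look roughly like this (paths taken from the log; only a sketch of the flow, not minikube's exact logic):

    # assumes the tarball was already copied to the node, as the scp steps above do
    sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
    sudo /usr/local/bin/crictl images | grep 'registry.k8s.io/etcd'   # confirm CRI-O now sees it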
	I1227 10:29:17.885423  505333 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 10:29:17.885518  505333 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-241090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
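The [Service] fragment above is a standard systemd override: the empty ExecStart= clears the packaged command before the minikube-specific one is set. Applied manually it would look roughly like the sketch below (the drop-in path is an assumption for illustration; minikube writes an equivalent override itself, and only a subset of the flags is shown):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-241090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart kubelet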
	I1227 10:29:17.885595  505333 ssh_runner.go:195] Run: crio config
	I1227 10:29:17.968240  505333 cni.go:84] Creating CNI manager for ""
	I1227 10:29:17.968266  505333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:29:17.968287  505333 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:29:17.968312  505333 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241090 NodeName:no-preload-241090 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:29:17.968462  505333 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-241090"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
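The generated kubeadm config above can be sanity-checked outside the test run. A minimal sketch, assuming the YAML documents shown here are saved locally as kubeadm.yaml and that a kubeadm binary matching v1.35.0 is on PATH (the `kubeadm config validate` subcommand is available in recent kubeadm releases):

    # Parse and validate the InitConfiguration/ClusterConfiguration (and component
    # configs) without touching the node.
    kubeadm config validate --config kubeadm.yaml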
	
	I1227 10:29:17.968551  505333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:29:17.979934  505333 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I1227 10:29:17.980039  505333 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I1227 10:29:17.989522  505333 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
	I1227 10:29:17.989604  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I1227 10:29:17.989684  505333 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet.sha256
	I1227 10:29:17.989714  505333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:29:17.989797  505333 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm.sha256
	I1227 10:29:17.989840  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I1227 10:29:18.012406  505333 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I1227 10:29:18.012444  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/linux/arm64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (68354232 bytes)
	I1227 10:29:18.012518  505333 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I1227 10:29:18.012541  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/linux/arm64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (55247032 bytes)
	I1227 10:29:18.012653  505333 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I1227 10:29:18.056057  505333 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I1227 10:29:18.056091  505333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/cache/linux/arm64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (54329636 bytes)
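The three transfers above use dl.k8s.io URLs with a `checksum=file:` query so minikube can verify each binary against its published SHA-256 before copying it to /var/lib/minikube/binaries. A minimal shell sketch of the same verification done by hand, assuming curl and sha256sum are available locally (URL and file name taken from the log):

    curl -LO "https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet.sha256"
    # The .sha256 file holds only the digest, so pair it with the file name for sha256sum.
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check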
	
	
	==> CRI-O <==
	Dec 27 10:29:06 embed-certs-367691 crio[836]: time="2025-12-27T10:29:06.447322453Z" level=info msg="Created container ea8cf32c27f7cff954b11f30d977f7667c99fa60e2c6a75c2ae88f25b61d0300: kube-system/coredns-7d764666f9-t88nq/coredns" id=8f95fd57-0be2-4f95-b514-c6dd71eb0eea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:29:06 embed-certs-367691 crio[836]: time="2025-12-27T10:29:06.4591139Z" level=info msg="Starting container: ea8cf32c27f7cff954b11f30d977f7667c99fa60e2c6a75c2ae88f25b61d0300" id=5352c3da-3596-45d9-9fed-8da06c323c06 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:29:06 embed-certs-367691 crio[836]: time="2025-12-27T10:29:06.4781952Z" level=info msg="Started container" PID=1762 containerID=ea8cf32c27f7cff954b11f30d977f7667c99fa60e2c6a75c2ae88f25b61d0300 description=kube-system/coredns-7d764666f9-t88nq/coredns id=5352c3da-3596-45d9-9fed-8da06c323c06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c34afefdf0690871ca98d32a4d89311b88deed0094a2f508895f81985c071490
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.758476918Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b8c872df-9fad-4e1a-b8e0-5fc4d1598342 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.758599086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.772465047Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b2c152877c289ba762805cc973dfad2004cfc9eaca0394e495e609c17d2428dd UID:dc858700-a966-41a6-8e94-31faef3ddea6 NetNS:/var/run/netns/9fb2053f-1099-4849-a0f1-99936e77a0eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012d0280}] Aliases:map[]}"
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.772763635Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.784485592Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b2c152877c289ba762805cc973dfad2004cfc9eaca0394e495e609c17d2428dd UID:dc858700-a966-41a6-8e94-31faef3ddea6 NetNS:/var/run/netns/9fb2053f-1099-4849-a0f1-99936e77a0eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012d0280}] Aliases:map[]}"
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.784786239Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.792299693Z" level=info msg="Ran pod sandbox b2c152877c289ba762805cc973dfad2004cfc9eaca0394e495e609c17d2428dd with infra container: default/busybox/POD" id=b8c872df-9fad-4e1a-b8e0-5fc4d1598342 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.793973729Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0bbae4c-4d23-4fe0-800d-e51d2cb923d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.794207233Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f0bbae4c-4d23-4fe0-800d-e51d2cb923d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.794310791Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f0bbae4c-4d23-4fe0-800d-e51d2cb923d1 name=/runtime.v1.ImageService/ImageStatus

	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.798260566Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=823c9aef-8d49-4ca5-bf56-4ee28f11da20 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:29:09 embed-certs-367691 crio[836]: time="2025-12-27T10:29:09.80141332Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 10:29:11 embed-certs-367691 crio[836]: time="2025-12-27T10:29:11.945335707Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=823c9aef-8d49-4ca5-bf56-4ee28f11da20 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:29:11 embed-certs-367691 crio[836]: time="2025-12-27T10:29:11.946087066Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=52506039-1d66-4812-b191-4ba4589fce53 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:29:11 embed-certs-367691 crio[836]: time="2025-12-27T10:29:11.951266516Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ce2d6a4-f5e8-406c-a593-49c51cfa501d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:29:11 embed-certs-367691 crio[836]: time="2025-12-27T10:29:11.958352954Z" level=info msg="Creating container: default/busybox/busybox" id=aa150150-a755-4b9b-993a-854848951d1e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:29:11 embed-certs-367691 crio[836]: time="2025-12-27T10:29:11.958511939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:29:11 embed-certs-367691 crio[836]: time="2025-12-27T10:29:11.966650647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:29:11 embed-certs-367691 crio[836]: time="2025-12-27T10:29:11.967150122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:29:11 embed-certs-367691 crio[836]: time="2025-12-27T10:29:11.992993674Z" level=info msg="Created container e77f80bcfb7d2bc330da17c8fa9579ccb97698ca904963a6b9a8bfb09623a5c1: default/busybox/busybox" id=aa150150-a755-4b9b-993a-854848951d1e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:29:11 embed-certs-367691 crio[836]: time="2025-12-27T10:29:11.996451063Z" level=info msg="Starting container: e77f80bcfb7d2bc330da17c8fa9579ccb97698ca904963a6b9a8bfb09623a5c1" id=88685455-5560-4b55-a569-62d83a71b114 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:29:12 embed-certs-367691 crio[836]: time="2025-12-27T10:29:12.0013431Z" level=info msg="Started container" PID=1825 containerID=e77f80bcfb7d2bc330da17c8fa9579ccb97698ca904963a6b9a8bfb09623a5c1 description=default/busybox/busybox id=88685455-5560-4b55-a569-62d83a71b114 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b2c152877c289ba762805cc973dfad2004cfc9eaca0394e495e609c17d2428dd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	e77f80bcfb7d2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   b2c152877c289       busybox                                      default
	ea8cf32c27f7c       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      13 seconds ago      Running             coredns                   0                   c34afefdf0690       coredns-7d764666f9-t88nq                     kube-system
	f8ddb5764abf7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   2a38d231b382d       storage-provisioner                          kube-system
	cfa0b8a76ce78       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   9a2f852811e72       kindnet-8pr87                                kube-system
	7d4ba23dd033a       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      27 seconds ago      Running             kube-proxy                0                   9037c346be9f2       kube-proxy-rpjg8                             kube-system
	d0008498d5dcc       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      38 seconds ago      Running             kube-apiserver            0                   21246af0b0c95       kube-apiserver-embed-certs-367691            kube-system
	1e4929b2bcb91       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      38 seconds ago      Running             kube-controller-manager   0                   4d6fabbc21ca3       kube-controller-manager-embed-certs-367691   kube-system
	3e6b623771d02       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      38 seconds ago      Running             etcd                      0                   3754a89606a05       etcd-embed-certs-367691                      kube-system
	98320314a1e4c       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      38 seconds ago      Running             kube-scheduler            0                   13df900cb3804       kube-scheduler-embed-certs-367691            kube-system
	
	
	==> coredns [ea8cf32c27f7cff954b11f30d977f7667c99fa60e2c6a75c2ae88f25b61d0300] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50820 - 23808 "HINFO IN 8743921385639058840.7319646041365780321. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020325022s
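The CoreDNS log above only shows its own HINFO self-test query. A quick way to confirm cluster DNS from a workload is to resolve the API service from the busybox pod that this post-mortem lists in the default namespace; a minimal sketch, assuming the kubectl context name from this report and that the busybox 1.28 image's nslookup behaves as usual:

    kubectl --context embed-certs-367691 exec busybox -- nslookup kubernetes.default.svc.cluster.local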
	
	
	==> describe nodes <==
	Name:               embed-certs-367691
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-367691
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=embed-certs-367691
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_28_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:28:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-367691
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:29:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:29:18 +0000   Sat, 27 Dec 2025 10:28:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:29:18 +0000   Sat, 27 Dec 2025 10:28:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:29:18 +0000   Sat, 27 Dec 2025 10:28:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:29:18 +0000   Sat, 27 Dec 2025 10:29:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-367691
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                220a60ff-ddbf-4af6-ab3b-b3aec69cd7bb
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-t88nq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-embed-certs-367691                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-8pr87                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-367691             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-367691    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-rpjg8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-367691             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node embed-certs-367691 event: Registered Node embed-certs-367691 in Controller
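The node summary above is a describe dump; if you want to script a check against the same capacity/allocatable figures, a minimal sketch using the node and context names from this report:

    kubectl --context embed-certs-367691 describe node embed-certs-367691
    # Or just the allocatable block as JSON:
    kubectl --context embed-certs-367691 get node embed-certs-367691 -o jsonpath='{.status.allocatable}'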
	
	
	==> dmesg <==
	[Dec27 09:59] overlayfs: idmapped layers are currently not supported
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	[Dec27 10:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3e6b623771d02217d2cf5146067545c7208a0743ced7838f14965c5c8dc363ea] <==
	{"level":"info","ts":"2025-12-27T10:28:41.290229Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:28:41.364008Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T10:28:41.364127Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T10:28:41.364245Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T10:28:41.364312Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:28:41.364355Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:28:41.367759Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:28:41.367860Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:28:41.367919Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T10:28:41.367961Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:28:41.369249Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:28:41.370315Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-367691 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:28:41.370319Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:28:41.370361Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:28:41.374317Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:28:41.375175Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:28:41.377719Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:28:41.378789Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:28:41.381371Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:28:41.381499Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:28:41.381559Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:28:41.381605Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:28:41.381638Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:28:41.381718Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T10:28:41.381812Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 10:29:20 up  2:11,  0 user,  load average: 2.40, 1.88, 1.92
	Linux embed-certs-367691 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cfa0b8a76ce78b623007b4489b14717aeb8ef15afcc75600be91a22fa72ecfa7] <==
	I1227 10:28:55.332792       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:28:55.333160       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:28:55.333327       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:28:55.333373       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:28:55.333413       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:28:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:28:55.618233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:28:55.618341       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:28:55.618381       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:28:55.619410       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:28:55.818929       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:28:55.818969       1 metrics.go:72] Registering metrics
	I1227 10:28:55.819019       1 controller.go:711] "Syncing nftables rules"
	I1227 10:29:05.618384       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:29:05.618446       1 main.go:301] handling current node
	I1227 10:29:15.618365       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:29:15.618408       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0008498d5dcc7047841e51520c0cd24e5ce0ecb185f9a445c00c3ace7e40e4f] <==
	I1227 10:28:44.291436       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:28:44.291567       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:28:44.294928       1 controller.go:667] quota admission added evaluator for: namespaces
	E1227 10:28:44.298710       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1227 10:28:44.302029       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:28:44.303596       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:28:44.470266       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:28:44.943379       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 10:28:44.949503       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 10:28:44.949527       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:28:45.929162       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:28:45.984576       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:28:46.052298       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 10:28:46.060377       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 10:28:46.061644       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:28:46.066887       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:28:46.093451       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:28:47.205619       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:28:47.234558       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 10:28:47.255451       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 10:28:51.551794       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:28:51.557204       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:28:51.798637       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:28:52.105874       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1227 10:29:17.649230       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:32994: use of closed network connection
	
	
	==> kube-controller-manager [1e4929b2bcb9158e1033d4ce6f65a2f9186c3e9e071bc06c137b8c1aad1150df] <==
	I1227 10:28:50.910205       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.910227       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.910237       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.910242       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.914325       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.914338       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.914345       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.914352       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.914367       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.914373       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.914380       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.914385       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.910094       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.910103       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.956789       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:28:50.956827       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:28:50.956841       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:28:50.956855       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.910118       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:50.974692       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-367691" podCIDRs=["10.244.0.0/24"]
	I1227 10:28:51.007436       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:51.007472       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:28:51.007479       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:28:51.026877       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:10.916186       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [7d4ba23dd033aa286120b6a7ca78f8e8309ba0142df74b0dafbe85f88cabdf07] <==
	I1227 10:28:52.635423       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:28:52.744038       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:28:52.845842       1 shared_informer.go:377] "Caches are synced"
	I1227 10:28:52.845892       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:28:52.845989       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:28:52.894930       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:28:52.894991       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:28:52.908551       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:28:52.913119       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:28:52.913136       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:28:52.914582       1 config.go:200] "Starting service config controller"
	I1227 10:28:52.914593       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:28:52.914609       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:28:52.914613       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:28:52.914623       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:28:52.914626       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:28:52.915271       1 config.go:309] "Starting node config controller"
	I1227 10:28:52.915280       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:28:52.915286       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:28:53.015711       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:28:53.015762       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:28:53.015797       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [98320314a1e4c1c2e4e78c6582704b840cc630e0adc5517c11390bb0feaeb15e] <==
	E1227 10:28:44.228714       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:28:44.228802       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:28:44.228875       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:28:44.229073       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:28:44.229182       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:28:44.229317       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:28:44.229346       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:28:44.229376       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:28:45.049453       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:28:45.113701       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:28:45.133837       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:28:45.137341       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:28:45.188478       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:28:45.204572       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:28:45.224655       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:28:45.248411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:28:45.300455       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:28:45.320536       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:28:45.353160       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:28:45.449140       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:28:45.450844       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:28:45.492501       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:28:45.521505       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:28:45.667892       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	I1227 10:28:47.482782       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:28:52 embed-certs-367691 kubelet[1306]: I1227 10:28:52.304545    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/77655848-4bef-4fdb-af7c-7f4bf3d0309b-cni-cfg\") pod \"kindnet-8pr87\" (UID: \"77655848-4bef-4fdb-af7c-7f4bf3d0309b\") " pod="kube-system/kindnet-8pr87"
	Dec 27 10:28:52 embed-certs-367691 kubelet[1306]: I1227 10:28:52.404236    1306 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:28:52 embed-certs-367691 kubelet[1306]: W1227 10:28:52.478785    1306 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/crio-9037c346be9f2c0caf84a58c617249f1ba03efe90f3d496f6409d44c9da2ba7e WatchSource:0}: Error finding container 9037c346be9f2c0caf84a58c617249f1ba03efe90f3d496f6409d44c9da2ba7e: Status 404 returned error can't find the container with id 9037c346be9f2c0caf84a58c617249f1ba03efe90f3d496f6409d44c9da2ba7e
	Dec 27 10:28:52 embed-certs-367691 kubelet[1306]: W1227 10:28:52.553512    1306 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/crio-9a2f852811e7252dd2270b8870e52005f1694e1c260a69fffcf91f58be0d58e6 WatchSource:0}: Error finding container 9a2f852811e7252dd2270b8870e52005f1694e1c260a69fffcf91f58be0d58e6: Status 404 returned error can't find the container with id 9a2f852811e7252dd2270b8870e52005f1694e1c260a69fffcf91f58be0d58e6
	Dec 27 10:28:53 embed-certs-367691 kubelet[1306]: E1227 10:28:53.431624    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-367691" containerName="kube-scheduler"
	Dec 27 10:28:53 embed-certs-367691 kubelet[1306]: I1227 10:28:53.450963    1306 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-rpjg8" podStartSLOduration=1.450936767 podStartE2EDuration="1.450936767s" podCreationTimestamp="2025-12-27 10:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:28:53.339398014 +0000 UTC m=+6.302057383" watchObservedRunningTime="2025-12-27 10:28:53.450936767 +0000 UTC m=+6.413596111"
	Dec 27 10:28:57 embed-certs-367691 kubelet[1306]: E1227 10:28:57.506054    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-367691" containerName="kube-controller-manager"
	Dec 27 10:28:57 embed-certs-367691 kubelet[1306]: I1227 10:28:57.520458    1306 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-8pr87" podStartSLOduration=2.86122485 podStartE2EDuration="5.520440927s" podCreationTimestamp="2025-12-27 10:28:52 +0000 UTC" firstStartedPulling="2025-12-27 10:28:52.559007384 +0000 UTC m=+5.521666728" lastFinishedPulling="2025-12-27 10:28:55.218223461 +0000 UTC m=+8.180882805" observedRunningTime="2025-12-27 10:28:55.336075304 +0000 UTC m=+8.298734656" watchObservedRunningTime="2025-12-27 10:28:57.520440927 +0000 UTC m=+10.483100270"
	Dec 27 10:29:00 embed-certs-367691 kubelet[1306]: E1227 10:29:00.389533    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-367691" containerName="kube-apiserver"
	Dec 27 10:29:01 embed-certs-367691 kubelet[1306]: E1227 10:29:01.464495    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-367691" containerName="etcd"
	Dec 27 10:29:03 embed-certs-367691 kubelet[1306]: E1227 10:29:03.437562    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-367691" containerName="kube-scheduler"
	Dec 27 10:29:05 embed-certs-367691 kubelet[1306]: I1227 10:29:05.940713    1306 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 10:29:06 embed-certs-367691 kubelet[1306]: I1227 10:29:06.040063    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf-tmp\") pod \"storage-provisioner\" (UID: \"554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf\") " pod="kube-system/storage-provisioner"
	Dec 27 10:29:06 embed-certs-367691 kubelet[1306]: I1227 10:29:06.040124    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzk5h\" (UniqueName: \"kubernetes.io/projected/554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf-kube-api-access-fzk5h\") pod \"storage-provisioner\" (UID: \"554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf\") " pod="kube-system/storage-provisioner"
	Dec 27 10:29:06 embed-certs-367691 kubelet[1306]: I1227 10:29:06.040153    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6209d048-3dca-4ad1-849b-159b1b571154-config-volume\") pod \"coredns-7d764666f9-t88nq\" (UID: \"6209d048-3dca-4ad1-849b-159b1b571154\") " pod="kube-system/coredns-7d764666f9-t88nq"
	Dec 27 10:29:06 embed-certs-367691 kubelet[1306]: I1227 10:29:06.040175    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675cq\" (UniqueName: \"kubernetes.io/projected/6209d048-3dca-4ad1-849b-159b1b571154-kube-api-access-675cq\") pod \"coredns-7d764666f9-t88nq\" (UID: \"6209d048-3dca-4ad1-849b-159b1b571154\") " pod="kube-system/coredns-7d764666f9-t88nq"
	Dec 27 10:29:07 embed-certs-367691 kubelet[1306]: E1227 10:29:07.378026    1306 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-t88nq" containerName="coredns"
	Dec 27 10:29:07 embed-certs-367691 kubelet[1306]: I1227 10:29:07.406487    1306 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.406469424 podStartE2EDuration="14.406469424s" podCreationTimestamp="2025-12-27 10:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:29:07.385970164 +0000 UTC m=+20.348629516" watchObservedRunningTime="2025-12-27 10:29:07.406469424 +0000 UTC m=+20.369128776"
	Dec 27 10:29:07 embed-certs-367691 kubelet[1306]: I1227 10:29:07.454280    1306 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-t88nq" podStartSLOduration=15.454262876 podStartE2EDuration="15.454262876s" podCreationTimestamp="2025-12-27 10:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:29:07.412367133 +0000 UTC m=+20.375026485" watchObservedRunningTime="2025-12-27 10:29:07.454262876 +0000 UTC m=+20.416922220"
	Dec 27 10:29:07 embed-certs-367691 kubelet[1306]: E1227 10:29:07.517468    1306 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-367691" containerName="kube-controller-manager"
	Dec 27 10:29:08 embed-certs-367691 kubelet[1306]: E1227 10:29:08.380619    1306 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-t88nq" containerName="coredns"
	Dec 27 10:29:09 embed-certs-367691 kubelet[1306]: E1227 10:29:09.383064    1306 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-t88nq" containerName="coredns"
	Dec 27 10:29:09 embed-certs-367691 kubelet[1306]: I1227 10:29:09.595996    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtkvg\" (UniqueName: \"kubernetes.io/projected/dc858700-a966-41a6-8e94-31faef3ddea6-kube-api-access-gtkvg\") pod \"busybox\" (UID: \"dc858700-a966-41a6-8e94-31faef3ddea6\") " pod="default/busybox"
	Dec 27 10:29:09 embed-certs-367691 kubelet[1306]: W1227 10:29:09.789597    1306 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/crio-b2c152877c289ba762805cc973dfad2004cfc9eaca0394e495e609c17d2428dd WatchSource:0}: Error finding container b2c152877c289ba762805cc973dfad2004cfc9eaca0394e495e609c17d2428dd: Status 404 returned error can't find the container with id b2c152877c289ba762805cc973dfad2004cfc9eaca0394e495e609c17d2428dd
	Dec 27 10:29:17 embed-certs-367691 kubelet[1306]: E1227 10:29:17.649707    1306 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43222->127.0.0.1:36645: write tcp 127.0.0.1:43222->127.0.0.1:36645: write: broken pipe
	
	
	==> storage-provisioner [f8ddb5764abf731e88b3e82a985ff55da3c42ee080294d5ca428d5a53533a29e] <==
	I1227 10:29:06.442025       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:29:06.511410       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:29:06.511624       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:29:06.515145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:06.528894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:29:06.529127       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:29:06.529637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41e16e59-3f7f-429b-a593-eb5c08bee361", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-367691_14e87a8e-9a2c-4b34-9f3a-1269b5ea8968 became leader
	I1227 10:29:06.532188       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-367691_14e87a8e-9a2c-4b34-9f3a-1269b5ea8968!
	W1227 10:29:06.574290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:06.577692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:29:06.632953       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-367691_14e87a8e-9a2c-4b34-9f3a-1269b5ea8968!
	W1227 10:29:08.582625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:08.593105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:10.597000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:10.602475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:12.605520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:12.610912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:14.613775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:14.618632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:16.623180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:16.629397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:18.650560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:18.666309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-367691 -n embed-certs-367691
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-367691 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-241090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-241090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (304.329979ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:30:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-241090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
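The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED guard: before enabling an addon it checks whether the cluster is paused by listing containers with `sudo runc list -f json` on the node, and on this crio-based node the runc state directory /run/runc does not exist, so the check itself errors out. Below is a minimal, illustrative Go sketch of that same probe run from the host; the node/container name and the docker-exec transport are assumptions for the example, not minikube's actual code path.

// repro_paused_check.go - illustrative only: mimics the failing "list paused"
// step by running `sudo runc list -f json` inside the minikube node container.
// The node name and the docker-exec transport are assumptions for this sketch.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	node := "no-preload-241090" // assumed: the node container name from this test run
	out, err := exec.Command("docker", "exec", node, "sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On crio-backed nodes /run/runc is typically absent, which is the
		// exact error reported in the stderr block above.
		if strings.Contains(string(out), "/run/runc: no such file or directory") {
			fmt.Println("paused check fails as in MK_ADDON_ENABLE_PAUSED:", strings.TrimSpace(string(out)))
			return
		}
		fmt.Println("runc list failed:", err, strings.TrimSpace(string(out)))
		return
	}
	fmt.Println("runc list output:", strings.TrimSpace(string(out)))
}
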
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-241090 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-241090 describe deploy/metrics-server -n kube-system: exit status 1 (99.247341ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-241090 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
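The assertion above can only pass if the metrics-server Deployment exists and its container image carries the overridden prefix fake.domain/registry.k8s.io/echoserver:1.4; here it was never created because the enable step aborted. As a rough sketch (not the test's own helper), an equivalent check could be driven through kubectl like this, reusing the context name from this run:

// check_addon_image.go - illustrative sketch of verifying that an addon
// Deployment picked up a custom image/registry override. The kubectl
// invocation is an assumption, not the helper used by start_stop_delete_test.go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := "fake.domain/registry.k8s.io/echoserver:1.4"
	out, err := exec.Command(
		"kubectl", "--context", "no-preload-241090",
		"-n", "kube-system",
		"get", "deploy/metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}",
	).CombinedOutput()
	if err != nil {
		// Matches the NotFound error in the log: the Deployment never existed
		// because `addons enable metrics-server` exited with status 11.
		fmt.Println("deployment not found or kubectl failed:", strings.TrimSpace(string(out)))
		return
	}
	if strings.Contains(string(out), expected) {
		fmt.Println("addon image override applied:", string(out))
	} else {
		fmt.Println("unexpected image(s):", string(out))
	}
}

In this run the sketch would take the NotFound path, consistent with the stderr block above.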
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-241090
helpers_test.go:244: (dbg) docker inspect no-preload-241090:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a",
	        "Created": "2025-12-27T10:28:59.433064249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505637,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:28:59.504878604Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/hosts",
	        "LogPath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a-json.log",
	        "Name": "/no-preload-241090",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241090:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241090",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a",
	                "LowerDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-241090",
	                "Source": "/var/lib/docker/volumes/no-preload-241090/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241090",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241090",
	                "name.minikube.sigs.k8s.io": "no-preload-241090",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "81bb859d8694bd92a2fd526668f9f30b30a8d6460ab3966b1c816c9347d1a374",
	            "SandboxKey": "/var/run/docker/netns/81bb859d8694",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241090": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:92:a4:0d:90:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8d3a00ff7095640f7433799c8c32b498081b342b7c8dd02f4d6cb45f97d8125",
	                    "EndpointID": "524dd1c1940b07c3ad6c2ce54991b7841a7cb1c9cf33db75df07b3a2eba69042",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241090",
	                        "f3d580a4684b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241090 -n no-preload-241090
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241090 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-241090 logs -n 25: (1.336443652s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-482317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:24 UTC │
	│ start   │ -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:24 UTC │ 27 Dec 25 10:25 UTC │
	│ image   │ old-k8s-version-482317 image list --format=json                                                                                                                                                                                               │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │ 27 Dec 25 10:25 UTC │
	│ pause   │ -p old-k8s-version-482317 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:25 UTC │                     │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                                                                                     │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-784377 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-784377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ image   │ default-k8s-diff-port-784377 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ pause   │ -p default-k8s-diff-port-784377 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ ssh     │ force-systemd-flag-915850 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p force-systemd-flag-915850                                                                                                                                                                                                                  │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p disable-driver-mounts-913868                                                                                                                                                                                                               │ disable-driver-mounts-913868 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │                     │
	│ stop    │ -p embed-certs-367691 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable dashboard -p embed-certs-367691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-241090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:29:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:29:33.858784  508852 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:29:33.858939  508852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:29:33.858950  508852 out.go:374] Setting ErrFile to fd 2...
	I1227 10:29:33.858955  508852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:29:33.859213  508852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:29:33.859600  508852 out.go:368] Setting JSON to false
	I1227 10:29:33.860605  508852 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7927,"bootTime":1766823447,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:29:33.860682  508852 start.go:143] virtualization:  
	I1227 10:29:33.864135  508852 out.go:179] * [embed-certs-367691] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:29:33.868303  508852 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:29:33.868363  508852 notify.go:221] Checking for updates...
	I1227 10:29:33.874709  508852 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:29:33.877671  508852 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:29:33.880633  508852 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:29:33.883643  508852 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:29:33.886729  508852 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:29:33.891250  508852 config.go:182] Loaded profile config "embed-certs-367691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:29:33.891849  508852 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:29:33.916212  508852 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:29:33.916324  508852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:29:34.017491  508852 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-12-27 10:29:33.997718645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:29:34.017604  508852 docker.go:319] overlay module found
	I1227 10:29:34.021382  508852 out.go:179] * Using the docker driver based on existing profile
	I1227 10:29:34.025121  508852 start.go:309] selected driver: docker
	I1227 10:29:34.025163  508852 start.go:928] validating driver "docker" against &{Name:embed-certs-367691 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:29:34.025295  508852 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:29:34.026039  508852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:29:34.137743  508852 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-12-27 10:29:34.126960127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:29:34.138068  508852 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:29:34.138088  508852 cni.go:84] Creating CNI manager for ""
	I1227 10:29:34.138146  508852 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:29:34.138182  508852 start.go:353] cluster config:
	{Name:embed-certs-367691 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:29:34.141478  508852 out.go:179] * Starting "embed-certs-367691" primary control-plane node in "embed-certs-367691" cluster
	I1227 10:29:34.145127  508852 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:29:34.148717  508852 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:29:34.151547  508852 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:29:34.151617  508852 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:29:34.151632  508852 cache.go:65] Caching tarball of preloaded images
	I1227 10:29:34.151645  508852 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:29:34.151747  508852 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:29:34.151759  508852 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:29:34.151883  508852 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/config.json ...
	I1227 10:29:34.179486  508852 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:29:34.179514  508852 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:29:34.179535  508852 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:29:34.179569  508852 start.go:360] acquireMachinesLock for embed-certs-367691: {Name:mkb83b0668d0dafda9600ffbecce26be02e61e8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:29:34.179643  508852 start.go:364] duration metric: took 49.478µs to acquireMachinesLock for "embed-certs-367691"
	I1227 10:29:34.179667  508852 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:29:34.179676  508852 fix.go:54] fixHost starting: 
	I1227 10:29:34.179952  508852 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:29:34.197942  508852 fix.go:112] recreateIfNeeded on embed-certs-367691: state=Stopped err=<nil>
	W1227 10:29:34.197973  508852 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 10:29:33.503181  505333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:29:34.005217  505333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:29:34.504100  505333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:29:35.003086  505333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:29:35.504035  505333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:29:36.003590  505333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:29:36.503610  505333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:29:37.004009  505333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:29:37.102419  505333 kubeadm.go:1114] duration metric: took 4.313858828s to wait for elevateKubeSystemPrivileges
	I1227 10:29:37.102455  505333 kubeadm.go:403] duration metric: took 16.448794962s to StartCluster
	I1227 10:29:37.102473  505333 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:29:37.102538  505333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:29:37.103174  505333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:29:37.103393  505333 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:29:37.103528  505333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:29:37.103753  505333 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:29:37.103830  505333 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:29:37.103839  505333 addons.go:70] Setting storage-provisioner=true in profile "no-preload-241090"
	I1227 10:29:37.103856  505333 addons.go:239] Setting addon storage-provisioner=true in "no-preload-241090"
	I1227 10:29:37.103869  505333 addons.go:70] Setting default-storageclass=true in profile "no-preload-241090"
	I1227 10:29:37.103881  505333 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:29:37.103887  505333 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-241090"
	I1227 10:29:37.104223  505333 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:29:37.104409  505333 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:29:37.106656  505333 out.go:179] * Verifying Kubernetes components...
	I1227 10:29:37.109658  505333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:29:37.156675  505333 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:29:37.160216  505333 addons.go:239] Setting addon default-storageclass=true in "no-preload-241090"
	I1227 10:29:37.160261  505333 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:29:37.160712  505333 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:29:37.160982  505333 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:29:37.160997  505333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:29:37.161042  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:37.193211  505333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:29:37.198016  505333 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:29:37.198040  505333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:29:37.198110  505333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:29:37.232079  505333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:29:37.429007  505333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:29:37.443996  505333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:29:37.444412  505333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:29:37.481831  505333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:29:34.200724  508852 out.go:252] * Restarting existing docker container for "embed-certs-367691" ...
	I1227 10:29:34.200841  508852 cli_runner.go:164] Run: docker start embed-certs-367691
	I1227 10:29:34.476483  508852 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:29:34.500790  508852 kic.go:430] container "embed-certs-367691" state is running.
	I1227 10:29:34.501190  508852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-367691
	I1227 10:29:34.526811  508852 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/config.json ...
	I1227 10:29:34.527071  508852 machine.go:94] provisionDockerMachine start ...
	I1227 10:29:34.527149  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:34.554441  508852 main.go:144] libmachine: Using SSH client type: native
	I1227 10:29:34.554774  508852 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1227 10:29:34.554789  508852 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:29:34.558732  508852 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:29:37.728443  508852 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-367691
	
	I1227 10:29:37.728475  508852 ubuntu.go:182] provisioning hostname "embed-certs-367691"
	I1227 10:29:37.728592  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:37.765710  508852 main.go:144] libmachine: Using SSH client type: native
	I1227 10:29:37.766038  508852 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1227 10:29:37.766056  508852 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-367691 && echo "embed-certs-367691" | sudo tee /etc/hostname
	I1227 10:29:37.959399  508852 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-367691
	
	I1227 10:29:37.959571  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:37.992366  508852 main.go:144] libmachine: Using SSH client type: native
	I1227 10:29:37.992676  508852 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1227 10:29:37.992693  508852 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-367691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-367691/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-367691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:29:38.181466  508852 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:29:38.181493  508852 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:29:38.181534  508852 ubuntu.go:190] setting up certificates
	I1227 10:29:38.181548  508852 provision.go:84] configureAuth start
	I1227 10:29:38.181612  508852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-367691
	I1227 10:29:38.214539  508852 provision.go:143] copyHostCerts
	I1227 10:29:38.214610  508852 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:29:38.214631  508852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:29:38.214710  508852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:29:38.214818  508852 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:29:38.214829  508852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:29:38.214857  508852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:29:38.214932  508852 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:29:38.214942  508852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:29:38.214968  508852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:29:38.215028  508852 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.embed-certs-367691 san=[127.0.0.1 192.168.76.2 embed-certs-367691 localhost minikube]
	I1227 10:29:38.632345  508852 provision.go:177] copyRemoteCerts
	I1227 10:29:38.632470  508852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:29:38.632533  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:38.659106  508852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:29:38.773933  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1227 10:29:38.806745  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:29:38.841618  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:29:38.922357  505333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.493312288s)
	I1227 10:29:38.922425  505333 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.478402492s)
	I1227 10:29:38.922436  505333 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1227 10:29:38.923499  505333 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.479063619s)
	I1227 10:29:38.924186  505333 node_ready.go:35] waiting up to 6m0s for node "no-preload-241090" to be "Ready" ...
	I1227 10:29:38.924444  505333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.442586843s)
	I1227 10:29:39.005010  505333 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:29:38.873734  508852 provision.go:87] duration metric: took 692.155501ms to configureAuth
	I1227 10:29:38.873765  508852 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:29:38.874025  508852 config.go:182] Loaded profile config "embed-certs-367691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:29:38.874171  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:38.901638  508852 main.go:144] libmachine: Using SSH client type: native
	I1227 10:29:38.901960  508852 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1227 10:29:38.901981  508852 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:29:39.406330  508852 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:29:39.406354  508852 machine.go:97] duration metric: took 4.879267358s to provisionDockerMachine
	I1227 10:29:39.406366  508852 start.go:293] postStartSetup for "embed-certs-367691" (driver="docker")
	I1227 10:29:39.406377  508852 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:29:39.406438  508852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:29:39.406485  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:39.432448  508852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:29:39.544247  508852 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:29:39.548550  508852 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:29:39.548583  508852 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:29:39.548595  508852 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:29:39.548655  508852 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:29:39.548738  508852 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:29:39.548853  508852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:29:39.562352  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:29:39.609430  508852 start.go:296] duration metric: took 203.04854ms for postStartSetup
	I1227 10:29:39.609544  508852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:29:39.609633  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:39.640392  508852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:29:39.753573  508852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:29:39.759588  508852 fix.go:56] duration metric: took 5.579904627s for fixHost
	I1227 10:29:39.759616  508852 start.go:83] releasing machines lock for "embed-certs-367691", held for 5.579961079s
	I1227 10:29:39.759682  508852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-367691
	I1227 10:29:39.778996  508852 ssh_runner.go:195] Run: cat /version.json
	I1227 10:29:39.779026  508852 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:29:39.779055  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:39.779096  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:39.813573  508852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:29:39.821792  508852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:29:39.947904  508852 ssh_runner.go:195] Run: systemctl --version
	I1227 10:29:40.058763  508852 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:29:40.126116  508852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:29:40.134913  508852 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:29:40.135044  508852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:29:40.146423  508852 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:29:40.146500  508852 start.go:496] detecting cgroup driver to use...
	I1227 10:29:40.146549  508852 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:29:40.146616  508852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:29:40.164930  508852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:29:40.180689  508852 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:29:40.180813  508852 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:29:40.198542  508852 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:29:40.214092  508852 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:29:40.390410  508852 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:29:40.544738  508852 docker.go:234] disabling docker service ...
	I1227 10:29:40.544868  508852 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:29:40.562457  508852 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:29:40.576901  508852 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:29:40.732495  508852 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:29:40.891304  508852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:29:40.914037  508852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:29:40.934941  508852 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:29:40.935094  508852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:40.950788  508852 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:29:40.950957  508852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:40.961341  508852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:40.972873  508852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:40.982626  508852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:29:40.990691  508852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:41.001757  508852 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:41.013605  508852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:29:41.022814  508852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:29:41.031698  508852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:29:41.042910  508852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:29:41.194618  508852 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:29:41.656111  508852 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:29:41.656195  508852 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:29:41.660365  508852 start.go:574] Will wait 60s for crictl version
	I1227 10:29:41.660430  508852 ssh_runner.go:195] Run: which crictl
	I1227 10:29:41.663923  508852 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:29:41.689366  508852 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:29:41.689458  508852 ssh_runner.go:195] Run: crio --version
	I1227 10:29:41.724238  508852 ssh_runner.go:195] Run: crio --version
	I1227 10:29:41.771682  508852 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:29:41.775513  508852 cli_runner.go:164] Run: docker network inspect embed-certs-367691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:29:41.807241  508852 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:29:41.811084  508852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:29:41.823469  508852 kubeadm.go:884] updating cluster {Name:embed-certs-367691 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:29:41.823602  508852 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:29:41.823662  508852 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:29:41.869076  508852 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:29:41.869103  508852 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:29:41.869160  508852 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:29:41.895803  508852 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:29:41.895829  508852 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:29:41.895837  508852 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:29:41.895944  508852 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-367691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:29:41.896058  508852 ssh_runner.go:195] Run: crio config
	I1227 10:29:41.966937  508852 cni.go:84] Creating CNI manager for ""
	I1227 10:29:41.967010  508852 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:29:41.967046  508852 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:29:41.967106  508852 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-367691 NodeName:embed-certs-367691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:29:41.967292  508852 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-367691"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:29:41.967412  508852 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:29:41.977566  508852 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:29:41.977706  508852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:29:41.992938  508852 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1227 10:29:42.010130  508852 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:29:42.030639  508852 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1227 10:29:42.048104  508852 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:29:42.053098  508852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:29:42.070207  508852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:29:42.229597  508852 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:29:42.252566  508852 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691 for IP: 192.168.76.2
	I1227 10:29:42.252601  508852 certs.go:195] generating shared ca certs ...
	I1227 10:29:42.252621  508852 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:29:42.252852  508852 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:29:42.252947  508852 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:29:42.252992  508852 certs.go:257] generating profile certs ...
	I1227 10:29:42.253238  508852 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/client.key
	I1227 10:29:42.253369  508852 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.key.b2a82a80
	I1227 10:29:42.253458  508852 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.key
	I1227 10:29:42.253630  508852 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:29:42.253697  508852 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:29:42.253711  508852 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:29:42.253765  508852 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:29:42.253838  508852 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:29:42.253886  508852 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:29:42.253964  508852 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:29:42.254901  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:29:42.277993  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:29:42.303333  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:29:42.328579  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:29:42.357308  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 10:29:42.382742  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:29:42.408392  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:29:42.436522  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/embed-certs-367691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:29:42.460668  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:29:42.487656  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:29:42.508783  508852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:29:42.528632  508852 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:29:42.543953  508852 ssh_runner.go:195] Run: openssl version
	I1227 10:29:42.550847  508852 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:29:42.560112  508852 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:29:42.568751  508852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:29:42.572613  508852 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:29:42.572712  508852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:29:42.616208  508852 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:29:42.623672  508852 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:29:42.631191  508852 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:29:42.638989  508852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:29:42.643175  508852 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:29:42.643297  508852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:29:42.696109  508852 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:29:42.705208  508852 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:29:42.712520  508852 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:29:42.720487  508852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:29:42.724082  508852 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:29:42.724222  508852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:29:42.773741  508852 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:29:42.781384  508852 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:29:42.785265  508852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:29:42.826153  508852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:29:42.868782  508852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:29:42.912399  508852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:29:42.986385  508852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:29:43.067496  508852 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 10:29:43.168513  508852 kubeadm.go:401] StartCluster: {Name:embed-certs-367691 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-367691 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:29:43.168614  508852 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:29:43.168712  508852 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:29:43.217404  508852 cri.go:96] found id: "7c5d69297d8771e299ca8b09a7cae96c2e7c5f87879fd1c567112742214e35f3"
	I1227 10:29:43.217427  508852 cri.go:96] found id: "56a9c2fee9e20cf978b01d8726c038b4c28e466158c9a28f5f3fdc75e851a27d"
	I1227 10:29:43.217432  508852 cri.go:96] found id: "a108df6f898b109467fab72294c0412641c1e5d2b2ea82f9edf2b1b962883dcf"
	I1227 10:29:43.217437  508852 cri.go:96] found id: "8c4fb8d9010ff30eec94b0fbcdd2a5948b473223b17ca2f2a4b0ce18bedff071"
	I1227 10:29:43.217448  508852 cri.go:96] found id: ""
	I1227 10:29:43.217522  508852 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:29:43.232593  508852 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:29:43Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:29:43.232713  508852 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:29:43.246180  508852 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:29:43.246201  508852 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:29:43.246277  508852 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:29:43.256527  508852 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:29:43.257121  508852 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-367691" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:29:43.257422  508852 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-367691" cluster setting kubeconfig missing "embed-certs-367691" context setting]
	I1227 10:29:43.257885  508852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:29:43.259558  508852 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:29:43.269508  508852 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1227 10:29:43.269542  508852 kubeadm.go:602] duration metric: took 23.334563ms to restartPrimaryControlPlane
	I1227 10:29:43.269552  508852 kubeadm.go:403] duration metric: took 101.050107ms to StartCluster
	I1227 10:29:43.269587  508852 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:29:43.269678  508852 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:29:43.270958  508852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:29:43.271242  508852 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:29:43.271788  508852 config.go:182] Loaded profile config "embed-certs-367691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:29:43.271781  508852 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:29:43.271921  508852 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-367691"
	I1227 10:29:43.271933  508852 addons.go:70] Setting dashboard=true in profile "embed-certs-367691"
	I1227 10:29:43.271944  508852 addons.go:70] Setting default-storageclass=true in profile "embed-certs-367691"
	I1227 10:29:43.271949  508852 addons.go:239] Setting addon dashboard=true in "embed-certs-367691"
	W1227 10:29:43.271955  508852 addons.go:248] addon dashboard should already be in state true
	I1227 10:29:43.271959  508852 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-367691"
	I1227 10:29:43.272002  508852 host.go:66] Checking if "embed-certs-367691" exists ...
	I1227 10:29:43.272288  508852 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:29:43.272661  508852 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:29:43.271937  508852 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-367691"
	W1227 10:29:43.272917  508852 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:29:43.272942  508852 host.go:66] Checking if "embed-certs-367691" exists ...
	I1227 10:29:43.273364  508852 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:29:43.275853  508852 out.go:179] * Verifying Kubernetes components...
	I1227 10:29:43.287420  508852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:29:43.324750  508852 addons.go:239] Setting addon default-storageclass=true in "embed-certs-367691"
	W1227 10:29:43.324776  508852 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:29:43.324802  508852 host.go:66] Checking if "embed-certs-367691" exists ...
	I1227 10:29:43.325239  508852 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:29:43.350155  508852 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:29:43.358118  508852 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:29:43.362107  508852 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:29:39.007892  505333 addons.go:530] duration metric: took 1.904129045s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 10:29:39.425839  505333 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-241090" context rescaled to 1 replicas
	W1227 10:29:40.930789  505333 node_ready.go:57] node "no-preload-241090" has "Ready":"False" status (will retry)
	I1227 10:29:43.362189  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:29:43.362206  508852 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:29:43.362275  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:43.365078  508852 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:29:43.365102  508852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:29:43.365166  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:43.396203  508852 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:29:43.396227  508852 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:29:43.396287  508852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:29:43.436592  508852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:29:43.440098  508852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:29:43.460510  508852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:29:43.632361  508852 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:29:43.677200  508852 node_ready.go:35] waiting up to 6m0s for node "embed-certs-367691" to be "Ready" ...
	I1227 10:29:43.701698  508852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:29:43.713609  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:29:43.713682  508852 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:29:43.727324  508852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:29:43.756607  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:29:43.756686  508852 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:29:43.819115  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:29:43.819188  508852 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:29:43.895748  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:29:43.895822  508852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:29:43.959479  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:29:43.959556  508852 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:29:43.993494  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:29:43.993571  508852 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:29:44.013332  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:29:44.013411  508852 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:29:44.031560  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:29:44.031638  508852 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:29:44.051950  508852 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:29:44.052055  508852 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:29:44.068377  508852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:29:46.661210  508852 node_ready.go:49] node "embed-certs-367691" is "Ready"
	I1227 10:29:46.661251  508852 node_ready.go:38] duration metric: took 2.983958465s for node "embed-certs-367691" to be "Ready" ...
	I1227 10:29:46.661269  508852 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:29:46.661358  508852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:29:47.753268  508852 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.051472029s)
	I1227 10:29:47.753358  508852 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.025967254s)
	I1227 10:29:48.107118  508852 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.038639794s)
	I1227 10:29:48.107414  508852 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.446043079s)
	I1227 10:29:48.107482  508852 api_server.go:72] duration metric: took 4.836205119s to wait for apiserver process to appear ...
	I1227 10:29:48.107503  508852 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:29:48.107547  508852 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:29:48.112271  508852 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-367691 addons enable metrics-server
	
	I1227 10:29:48.115388  508852 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1227 10:29:43.436384  505333 node_ready.go:57] node "no-preload-241090" has "Ready":"False" status (will retry)
	W1227 10:29:45.927026  505333 node_ready.go:57] node "no-preload-241090" has "Ready":"False" status (will retry)
	W1227 10:29:47.927748  505333 node_ready.go:57] node "no-preload-241090" has "Ready":"False" status (will retry)
	I1227 10:29:48.118470  508852 addons.go:530] duration metric: took 4.846691486s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1227 10:29:48.125705  508852 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 10:29:48.125737  508852 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 10:29:48.608116  508852 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:29:48.629409  508852 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:29:48.630535  508852 api_server.go:141] control plane version: v1.35.0
	I1227 10:29:48.630601  508852 api_server.go:131] duration metric: took 523.077138ms to wait for apiserver health ...
	I1227 10:29:48.630626  508852 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:29:48.638493  508852 system_pods.go:59] 8 kube-system pods found
	I1227 10:29:48.638579  508852 system_pods.go:61] "coredns-7d764666f9-t88nq" [6209d048-3dca-4ad1-849b-159b1b571154] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:48.638605  508852 system_pods.go:61] "etcd-embed-certs-367691" [05f20b28-c7d2-4dbb-a7b0-967ce049635e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:29:48.638640  508852 system_pods.go:61] "kindnet-8pr87" [77655848-4bef-4fdb-af7c-7f4bf3d0309b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:29:48.638668  508852 system_pods.go:61] "kube-apiserver-embed-certs-367691" [c535e1b3-fee8-4461-8b61-233aaa8495d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:29:48.638697  508852 system_pods.go:61] "kube-controller-manager-embed-certs-367691" [e1b8075e-1716-4ea6-88f9-462b0aff4cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:29:48.638733  508852 system_pods.go:61] "kube-proxy-rpjg8" [a721f14e-75c6-4caf-91f8-0e5d13c01982] Running
	I1227 10:29:48.638761  508852 system_pods.go:61] "kube-scheduler-embed-certs-367691" [40e24817-0b4b-4d8f-b43b-7fbd4a5f42fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:29:48.638784  508852 system_pods.go:61] "storage-provisioner" [554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf] Running
	I1227 10:29:48.638816  508852 system_pods.go:74] duration metric: took 8.169633ms to wait for pod list to return data ...
	I1227 10:29:48.638839  508852 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:29:48.648179  508852 default_sa.go:45] found service account: "default"
	I1227 10:29:48.648253  508852 default_sa.go:55] duration metric: took 9.393638ms for default service account to be created ...
	I1227 10:29:48.648280  508852 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:29:48.675059  508852 system_pods.go:86] 8 kube-system pods found
	I1227 10:29:48.675147  508852 system_pods.go:89] "coredns-7d764666f9-t88nq" [6209d048-3dca-4ad1-849b-159b1b571154] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:48.675174  508852 system_pods.go:89] "etcd-embed-certs-367691" [05f20b28-c7d2-4dbb-a7b0-967ce049635e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:29:48.675203  508852 system_pods.go:89] "kindnet-8pr87" [77655848-4bef-4fdb-af7c-7f4bf3d0309b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:29:48.675234  508852 system_pods.go:89] "kube-apiserver-embed-certs-367691" [c535e1b3-fee8-4461-8b61-233aaa8495d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:29:48.675258  508852 system_pods.go:89] "kube-controller-manager-embed-certs-367691" [e1b8075e-1716-4ea6-88f9-462b0aff4cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:29:48.675283  508852 system_pods.go:89] "kube-proxy-rpjg8" [a721f14e-75c6-4caf-91f8-0e5d13c01982] Running
	I1227 10:29:48.675310  508852 system_pods.go:89] "kube-scheduler-embed-certs-367691" [40e24817-0b4b-4d8f-b43b-7fbd4a5f42fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:29:48.675333  508852 system_pods.go:89] "storage-provisioner" [554a92c5-1cb0-469e-a9aa-3ee8d0d91cdf] Running
	I1227 10:29:48.675367  508852 system_pods.go:126] duration metric: took 27.065587ms to wait for k8s-apps to be running ...
	I1227 10:29:48.675390  508852 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:29:48.675460  508852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:29:48.695419  508852 system_svc.go:56] duration metric: took 20.0191ms WaitForService to wait for kubelet
	I1227 10:29:48.695490  508852 kubeadm.go:587] duration metric: took 5.424211522s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
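The kubelet check logged just above is a plain shell probe: minikube runs "sudo systemctl is-active --quiet service kubelet" over SSH and treats a zero exit status as "service running". A minimal standalone sketch of the same probe in Go follows; the sudo prefix is dropped and exit-status handling is the only logic, so treat it as an illustration rather than minikube's own ssh_runner code.

    // kubelet_check.go: minimal sketch of the "systemctl is-active" probe above.
    // systemctl exits 0 when the unit is active; --quiet suppresses the state text.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("kubelet service is not active:", err)
    		return
    	}
    	fmt.Println("kubelet service is active")
    }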
	I1227 10:29:48.695524  508852 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:29:48.703957  508852 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:29:48.704126  508852 node_conditions.go:123] node cpu capacity is 2
	I1227 10:29:48.704154  508852 node_conditions.go:105] duration metric: took 8.611156ms to run NodePressure ...
	I1227 10:29:48.704182  508852 start.go:242] waiting for startup goroutines ...
	I1227 10:29:48.704204  508852 start.go:247] waiting for cluster config update ...
	I1227 10:29:48.704228  508852 start.go:256] writing updated cluster config ...
	I1227 10:29:48.704513  508852 ssh_runner.go:195] Run: rm -f paused
	I1227 10:29:48.709015  508852 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:29:48.773170  508852 pod_ready.go:83] waiting for pod "coredns-7d764666f9-t88nq" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:29:50.427274  505333 node_ready.go:57] node "no-preload-241090" has "Ready":"False" status (will retry)
	I1227 10:29:52.430769  505333 node_ready.go:49] node "no-preload-241090" is "Ready"
	I1227 10:29:52.430797  505333 node_ready.go:38] duration metric: took 13.506581743s for node "no-preload-241090" to be "Ready" ...
	I1227 10:29:52.430811  505333 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:29:52.430870  505333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:29:52.446951  505333 api_server.go:72] duration metric: took 15.343516763s to wait for apiserver process to appear ...
	I1227 10:29:52.446975  505333 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:29:52.446995  505333 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 10:29:52.456841  505333 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 10:29:52.458197  505333 api_server.go:141] control plane version: v1.35.0
	I1227 10:29:52.458218  505333 api_server.go:131] duration metric: took 11.236668ms to wait for apiserver health ...
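The healthz wait above simply polls the apiserver's /healthz endpoint until it answers 200 with the body "ok". A minimal sketch of such a probe is shown below; the URL is taken from the log, while the retry count, interval, and the skipped TLS verification are illustrative assumptions (a real client should trust the cluster CA from the kubeconfig instead).

    // healthz_probe.go: sketch of polling the kube-apiserver /healthz endpoint
    // seen above until it returns 200 "ok". TLS verification is skipped for
    // brevity only; trust the cluster CA in real use.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.85.2:8443/healthz" // address from the log above
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver did not become healthy in time")
    }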
	I1227 10:29:52.458227  505333 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:29:52.464905  505333 system_pods.go:59] 8 kube-system pods found
	I1227 10:29:52.464939  505333 system_pods.go:61] "coredns-7d764666f9-5p545" [0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:52.464946  505333 system_pods.go:61] "etcd-no-preload-241090" [835a968b-a507-4885-a74d-434ece70fa72] Running
	I1227 10:29:52.464953  505333 system_pods.go:61] "kindnet-jh987" [6cbce1aa-237d-42fa-bc32-dde8b72f3668] Running
	I1227 10:29:52.464959  505333 system_pods.go:61] "kube-apiserver-no-preload-241090" [e5c18f64-1c76-496a-8dd0-b5cbcfffefb5] Running
	I1227 10:29:52.464963  505333 system_pods.go:61] "kube-controller-manager-no-preload-241090" [12e95943-625c-4a69-aeff-d4364483de48] Running
	I1227 10:29:52.464967  505333 system_pods.go:61] "kube-proxy-8xv88" [ffe92c3b-92ca-41f8-91a8-2c0983689068] Running
	I1227 10:29:52.464971  505333 system_pods.go:61] "kube-scheduler-no-preload-241090" [55ff2824-5114-426e-a833-df3be58eee18] Running
	I1227 10:29:52.464977  505333 system_pods.go:61] "storage-provisioner" [4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:29:52.464983  505333 system_pods.go:74] duration metric: took 6.750559ms to wait for pod list to return data ...
	I1227 10:29:52.464991  505333 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:29:52.468670  505333 default_sa.go:45] found service account: "default"
	I1227 10:29:52.468741  505333 default_sa.go:55] duration metric: took 3.734167ms for default service account to be created ...
	I1227 10:29:52.468764  505333 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:29:52.473642  505333 system_pods.go:86] 8 kube-system pods found
	I1227 10:29:52.473672  505333 system_pods.go:89] "coredns-7d764666f9-5p545" [0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:52.473679  505333 system_pods.go:89] "etcd-no-preload-241090" [835a968b-a507-4885-a74d-434ece70fa72] Running
	I1227 10:29:52.473686  505333 system_pods.go:89] "kindnet-jh987" [6cbce1aa-237d-42fa-bc32-dde8b72f3668] Running
	I1227 10:29:52.473690  505333 system_pods.go:89] "kube-apiserver-no-preload-241090" [e5c18f64-1c76-496a-8dd0-b5cbcfffefb5] Running
	I1227 10:29:52.473695  505333 system_pods.go:89] "kube-controller-manager-no-preload-241090" [12e95943-625c-4a69-aeff-d4364483de48] Running
	I1227 10:29:52.473700  505333 system_pods.go:89] "kube-proxy-8xv88" [ffe92c3b-92ca-41f8-91a8-2c0983689068] Running
	I1227 10:29:52.473704  505333 system_pods.go:89] "kube-scheduler-no-preload-241090" [55ff2824-5114-426e-a833-df3be58eee18] Running
	I1227 10:29:52.473713  505333 system_pods.go:89] "storage-provisioner" [4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:29:52.473739  505333 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1227 10:29:52.747176  505333 system_pods.go:86] 8 kube-system pods found
	I1227 10:29:52.747210  505333 system_pods.go:89] "coredns-7d764666f9-5p545" [0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:52.747218  505333 system_pods.go:89] "etcd-no-preload-241090" [835a968b-a507-4885-a74d-434ece70fa72] Running
	I1227 10:29:52.747224  505333 system_pods.go:89] "kindnet-jh987" [6cbce1aa-237d-42fa-bc32-dde8b72f3668] Running
	I1227 10:29:52.747230  505333 system_pods.go:89] "kube-apiserver-no-preload-241090" [e5c18f64-1c76-496a-8dd0-b5cbcfffefb5] Running
	I1227 10:29:52.747235  505333 system_pods.go:89] "kube-controller-manager-no-preload-241090" [12e95943-625c-4a69-aeff-d4364483de48] Running
	I1227 10:29:52.747240  505333 system_pods.go:89] "kube-proxy-8xv88" [ffe92c3b-92ca-41f8-91a8-2c0983689068] Running
	I1227 10:29:52.747245  505333 system_pods.go:89] "kube-scheduler-no-preload-241090" [55ff2824-5114-426e-a833-df3be58eee18] Running
	I1227 10:29:52.747251  505333 system_pods.go:89] "storage-provisioner" [4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:29:53.125762  505333 system_pods.go:86] 8 kube-system pods found
	I1227 10:29:53.125805  505333 system_pods.go:89] "coredns-7d764666f9-5p545" [0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:29:53.125813  505333 system_pods.go:89] "etcd-no-preload-241090" [835a968b-a507-4885-a74d-434ece70fa72] Running
	I1227 10:29:53.125819  505333 system_pods.go:89] "kindnet-jh987" [6cbce1aa-237d-42fa-bc32-dde8b72f3668] Running
	I1227 10:29:53.125824  505333 system_pods.go:89] "kube-apiserver-no-preload-241090" [e5c18f64-1c76-496a-8dd0-b5cbcfffefb5] Running
	I1227 10:29:53.125830  505333 system_pods.go:89] "kube-controller-manager-no-preload-241090" [12e95943-625c-4a69-aeff-d4364483de48] Running
	I1227 10:29:53.125834  505333 system_pods.go:89] "kube-proxy-8xv88" [ffe92c3b-92ca-41f8-91a8-2c0983689068] Running
	I1227 10:29:53.125839  505333 system_pods.go:89] "kube-scheduler-no-preload-241090" [55ff2824-5114-426e-a833-df3be58eee18] Running
	I1227 10:29:53.125844  505333 system_pods.go:89] "storage-provisioner" [4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf] Running
	I1227 10:29:53.125852  505333 system_pods.go:126] duration metric: took 657.06939ms to wait for k8s-apps to be running ...
	I1227 10:29:53.125868  505333 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:29:53.125926  505333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:29:53.141959  505333 system_svc.go:56] duration metric: took 16.07981ms WaitForService to wait for kubelet
	I1227 10:29:53.141994  505333 kubeadm.go:587] duration metric: took 16.038565745s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:29:53.142014  505333 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:29:53.145551  505333 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:29:53.145581  505333 node_conditions.go:123] node cpu capacity is 2
	I1227 10:29:53.145595  505333 node_conditions.go:105] duration metric: took 3.574945ms to run NodePressure ...
	I1227 10:29:53.145644  505333 start.go:242] waiting for startup goroutines ...
	I1227 10:29:53.145655  505333 start.go:247] waiting for cluster config update ...
	I1227 10:29:53.145675  505333 start.go:256] writing updated cluster config ...
	I1227 10:29:53.145989  505333 ssh_runner.go:195] Run: rm -f paused
	I1227 10:29:53.150860  505333 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:29:53.159213  505333 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5p545" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:29:50.781436  508852 pod_ready.go:104] pod "coredns-7d764666f9-t88nq" is not "Ready", error: <nil>
	W1227 10:29:53.279272  508852 pod_ready.go:104] pod "coredns-7d764666f9-t88nq" is not "Ready", error: <nil>
	I1227 10:29:53.666007  505333 pod_ready.go:94] pod "coredns-7d764666f9-5p545" is "Ready"
	I1227 10:29:53.666089  505333 pod_ready.go:86] duration metric: took 506.851538ms for pod "coredns-7d764666f9-5p545" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:53.671059  505333 pod_ready.go:83] waiting for pod "etcd-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:53.679100  505333 pod_ready.go:94] pod "etcd-no-preload-241090" is "Ready"
	I1227 10:29:53.679126  505333 pod_ready.go:86] duration metric: took 8.04205ms for pod "etcd-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:53.682986  505333 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:53.690569  505333 pod_ready.go:94] pod "kube-apiserver-no-preload-241090" is "Ready"
	I1227 10:29:53.690646  505333 pod_ready.go:86] duration metric: took 7.636393ms for pod "kube-apiserver-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:53.695406  505333 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:53.956342  505333 pod_ready.go:94] pod "kube-controller-manager-no-preload-241090" is "Ready"
	I1227 10:29:53.956423  505333 pod_ready.go:86] duration metric: took 260.936266ms for pod "kube-controller-manager-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:54.155640  505333 pod_ready.go:83] waiting for pod "kube-proxy-8xv88" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:54.556133  505333 pod_ready.go:94] pod "kube-proxy-8xv88" is "Ready"
	I1227 10:29:54.556159  505333 pod_ready.go:86] duration metric: took 400.495162ms for pod "kube-proxy-8xv88" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:54.755128  505333 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:55.155389  505333 pod_ready.go:94] pod "kube-scheduler-no-preload-241090" is "Ready"
	I1227 10:29:55.155422  505333 pod_ready.go:86] duration metric: took 400.269847ms for pod "kube-scheduler-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:29:55.155436  505333 pod_ready.go:40] duration metric: took 2.004544393s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:29:55.233688  505333 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:29:55.236952  505333 out.go:203] 
	W1227 10:29:55.240603  505333 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:29:55.244157  505333 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:29:55.248714  505333 out.go:179] * Done! kubectl is now configured to use "no-preload-241090" cluster and "default" namespace by default
	W1227 10:29:55.301189  508852 pod_ready.go:104] pod "coredns-7d764666f9-t88nq" is not "Ready", error: <nil>
	W1227 10:29:57.792656  508852 pod_ready.go:104] pod "coredns-7d764666f9-t88nq" is not "Ready", error: <nil>
	W1227 10:30:00.292405  508852 pod_ready.go:104] pod "coredns-7d764666f9-t88nq" is not "Ready", error: <nil>
	W1227 10:30:02.778653  508852 pod_ready.go:104] pod "coredns-7d764666f9-t88nq" is not "Ready", error: <nil>
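The pod_ready waits interleaved above (process 508852 for embed-certs-367691 and 505333 for no-preload-241090) repeatedly list the kube-system pods carrying one of the named labels and check each pod's Ready condition, retrying on the "is not Ready" warnings until the condition flips or the timeout expires. A minimal client-go sketch of that idea follows; the label selector and namespace come from the log, while the kubeconfig path and polling interval are assumptions for illustration.

    // pod_ready_wait.go: sketch of waiting for labelled kube-system pods to report
    // Ready, mirroring the pod_ready.go waits in the log above.
    // The kubeconfig path and the 2s interval are illustrative assumptions.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// One of the label selectors used by the extra wait in the log above.
    	sel := "k8s-app=kube-dns"
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: sel})
    		if err != nil {
    			panic(err)
    		}
    		allReady := len(pods.Items) > 0
    		for i := range pods.Items {
    			if !podReady(&pods.Items[i]) {
    				allReady = false
    			}
    		}
    		if allReady {
    			fmt.Println("all", sel, "pods are Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }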
	
	
	==> CRI-O <==
	Dec 27 10:29:52 no-preload-241090 crio[836]: time="2025-12-27T10:29:52.815477904Z" level=info msg="Created container aaec7878b4348d04dd8ec068c291fc3a874c29815131e6b403e21da229e4b155: kube-system/coredns-7d764666f9-5p545/coredns" id=d5f89c18-0139-4505-992a-4f8221fdf163 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:29:52 no-preload-241090 crio[836]: time="2025-12-27T10:29:52.816654211Z" level=info msg="Starting container: aaec7878b4348d04dd8ec068c291fc3a874c29815131e6b403e21da229e4b155" id=f3a766d6-71b7-4c48-b5ce-2facf7a16320 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:29:52 no-preload-241090 crio[836]: time="2025-12-27T10:29:52.82045818Z" level=info msg="Started container" PID=2424 containerID=aaec7878b4348d04dd8ec068c291fc3a874c29815131e6b403e21da229e4b155 description=kube-system/coredns-7d764666f9-5p545/coredns id=f3a766d6-71b7-4c48-b5ce-2facf7a16320 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c64cfcc256f1e49c27713191dd554251d68960d6fb43ce1e34b71f26570f78fa
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.7998774Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bd541921-9dd8-4024-bed2-5041b514c88e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.799950697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.809352048Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:46ad4546ea5fa6ba7d141c72d28281b1159bfb647889c1d5084f42250d4693aa UID:67c83974-917e-46ed-b633-b33ab87382c0 NetNS:/var/run/netns/712747ef-5d64-4182-b6f4-6007d4cfbada Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079e58}] Aliases:map[]}"
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.809395084Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.826580686Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:46ad4546ea5fa6ba7d141c72d28281b1159bfb647889c1d5084f42250d4693aa UID:67c83974-917e-46ed-b633-b33ab87382c0 NetNS:/var/run/netns/712747ef-5d64-4182-b6f4-6007d4cfbada Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079e58}] Aliases:map[]}"
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.826929244Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.833015123Z" level=info msg="Ran pod sandbox 46ad4546ea5fa6ba7d141c72d28281b1159bfb647889c1d5084f42250d4693aa with infra container: default/busybox/POD" id=bd541921-9dd8-4024-bed2-5041b514c88e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.839371281Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab1f6961-667f-4015-b422-a4ee0fafbeab name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.839750026Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ab1f6961-667f-4015-b422-a4ee0fafbeab name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.840575125Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ab1f6961-667f-4015-b422-a4ee0fafbeab name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.842104109Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab6ab82b-7644-4143-9ef1-6500b33c3b54 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:29:55 no-preload-241090 crio[836]: time="2025-12-27T10:29:55.843849998Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.048575901Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=ab6ab82b-7644-4143-9ef1-6500b33c3b54 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.049218746Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ac67d9c1-4dc2-4c2e-b5d1-a788910b6afd name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.051326518Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0273e7c8-7204-4abc-b10a-e1c38c437bc6 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.058964339Z" level=info msg="Creating container: default/busybox/busybox" id=48b0c5fb-4bc8-4ab7-80a5-b8ab16529fd3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.059119852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.067556795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.070207603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.093536061Z" level=info msg="Created container c7f02e04dff07c8d6dd8ef4991ff45d7378f144686030ab9b8eb313cf2d08bac: default/busybox/busybox" id=48b0c5fb-4bc8-4ab7-80a5-b8ab16529fd3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.099227893Z" level=info msg="Starting container: c7f02e04dff07c8d6dd8ef4991ff45d7378f144686030ab9b8eb313cf2d08bac" id=20bacdb3-61b5-4c56-b94f-e0f5dd501192 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:29:58 no-preload-241090 crio[836]: time="2025-12-27T10:29:58.101892068Z" level=info msg="Started container" PID=2476 containerID=c7f02e04dff07c8d6dd8ef4991ff45d7378f144686030ab9b8eb313cf2d08bac description=default/busybox/busybox id=20bacdb3-61b5-4c56-b94f-e0f5dd501192 name=/runtime.v1.RuntimeService/StartContainer sandboxID=46ad4546ea5fa6ba7d141c72d28281b1159bfb647889c1d5084f42250d4693aa
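The CRI-O entries above show the kubelet driving the busybox pod through the CRI gRPC API: ImageService/ImageStatus reports the image missing, ImageService/PullImage fetches it, then RuntimeService CreateContainer/StartContainer run it inside the sandbox. A sketch of the first two calls issued directly against CRI-O's socket follows; the socket path is CRI-O's default and an assumption for this host, and the image name is taken from the log.

    // cri_image_check.go: sketch of the ImageStatus / PullImage sequence visible
    // in the CRI-O log above, issued over the CRI gRPC socket.
    // The socket path is CRI-O's default and assumed here.
    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
    	images := runtimeapi.NewImageServiceClient(conn)

    	status, err := images.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: img})
    	if err != nil {
    		panic(err)
    	}
    	if status.Image == nil {
    		// Mirrors the "Image ... not found" / "Pulling image" sequence above.
    		pulled, err := images.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: img})
    		if err != nil {
    			panic(err)
    		}
    		fmt.Println("pulled:", pulled.ImageRef)
    		return
    	}
    	fmt.Println("already present:", status.Image.Id)
    }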
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c7f02e04dff07       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   46ad4546ea5fa       busybox                                     default
	aaec7878b4348       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      14 seconds ago      Running             coredns                   0                   c64cfcc256f1e       coredns-7d764666f9-5p545                    kube-system
	cd5e24c721c25       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   e51b8170d839f       storage-provisioner                         kube-system
	d90ec9dbaabdf       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    25 seconds ago      Running             kindnet-cni               0                   fae2ac804cfe5       kindnet-jh987                               kube-system
	2a309354c0ba7       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      29 seconds ago      Running             kube-proxy                0                   cc44acbe4a003       kube-proxy-8xv88                            kube-system
	9f01bfec022ab       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      40 seconds ago      Running             kube-scheduler            0                   83b449df445b7       kube-scheduler-no-preload-241090            kube-system
	2e0fb446e624f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      40 seconds ago      Running             kube-apiserver            0                   355216ce062bc       kube-apiserver-no-preload-241090            kube-system
	c8871b2b839ab       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      40 seconds ago      Running             etcd                      0                   c9f9bb5c77f7e       etcd-no-preload-241090                      kube-system
	9600ec62faa94       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      40 seconds ago      Running             kube-controller-manager   0                   3d566d5863a37       kube-controller-manager-no-preload-241090   kube-system
	
	
	==> coredns [aaec7878b4348d04dd8ec068c291fc3a874c29815131e6b403e21da229e4b155] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52115 - 12082 "HINFO IN 1594194678469271948.1881815117037364572. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003751045s
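The CoreDNS block above logs only a SHA-512 of the loaded configuration, not the configuration itself. For orientation, a server block of the shape kubeadm installs by default (the ".:53" zone seen above) looks roughly like the excerpt below; the exact plugins behind the logged hash are not in this report, so treat it as a representative default rather than this cluster's actual Corefile.

    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }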
	
	
	==> describe nodes <==
	Name:               no-preload-241090
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-241090
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=no-preload-241090
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_29_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:29:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-241090
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:30:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:30:02 +0000   Sat, 27 Dec 2025 10:29:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:30:02 +0000   Sat, 27 Dec 2025 10:29:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:30:02 +0000   Sat, 27 Dec 2025 10:29:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:30:02 +0000   Sat, 27 Dec 2025 10:29:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-241090
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                a9a49f95-a33e-4498-b8f5-c7af217c180a
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-7d764666f9-5p545                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-241090                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-jh987                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-241090             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-241090    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-8xv88                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-241090             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  32s   node-controller  Node no-preload-241090 event: Registered Node no-preload-241090 in Controller
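The node description above is what the earlier node_conditions.go checks read programmatically: the pressure conditions must be False (and Ready True), and Capacity/Allocatable supply the "storage ephemeral capacity is 203034800Ki" and "cpu capacity is 2" figures. A small client-go sketch of that read follows; the node name comes from the log, the kubeconfig path is an assumption.

    // node_conditions.go: sketch of reading the node conditions and capacity shown
    // above, as the NodePressure verification earlier in the log does.
    // The kubeconfig path is an assumption.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-241090", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		// MemoryPressure/DiskPressure/PIDPressure should be False, Ready should be True.
    		fmt.Printf("%-16s %s\n", c.Type, c.Status)
    	}
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	fmt.Println("cpu capacity:", cpu.String(), "ephemeral-storage:", eph.String())
    }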
	
	
	==> dmesg <==
	[ +41.318304] overlayfs: idmapped layers are currently not supported
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	[Dec27 10:29] overlayfs: idmapped layers are currently not supported
	[ +34.978626] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c8871b2b839abbf0da0fcb997916bb551b50a4310c3c3f81b142c30dcf6109af] <==
	{"level":"info","ts":"2025-12-27T10:29:26.660340Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:29:26.924133Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T10:29:26.924258Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T10:29:26.924368Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-27T10:29:26.924449Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:29:26.924490Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:26.928000Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:26.928084Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:29:26.928154Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:26.928189Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:26.929791Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:29:26.932229Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-241090 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:29:26.932398Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:29:26.945077Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:29:26.961769Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:29:26.961881Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:29:26.962626Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:29:26.973086Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:29:26.981979Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:29:26.982177Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:29:26.984047Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:29:26.984142Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T10:29:26.984243Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T10:29:26.984517Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:29:26.984902Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 10:30:07 up  2:12,  0 user,  load average: 4.68, 2.59, 2.16
	Linux no-preload-241090 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d90ec9dbaabdf34166e0f7e8a4e79c06a7c6dba913375bf793e7dcdaf5a526f1] <==
	I1227 10:29:41.741849       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:29:41.742278       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:29:41.742469       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:29:41.742529       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:29:41.742569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:29:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:29:41.924250       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:29:41.924278       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:29:41.924288       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:29:41.927188       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:29:42.224904       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:29:42.225009       1 metrics.go:72] Registering metrics
	I1227 10:29:42.225091       1 controller.go:711] "Syncing nftables rules"
	I1227 10:29:51.924311       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:29:51.924504       1 main.go:301] handling current node
	I1227 10:30:01.924056       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:30:01.924188       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e0fb446e624ff468233bb7b5ea9d838d27dc0b350db83fb4b0249f0d06b1bca] <==
	I1227 10:29:29.331506       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:29:29.331538       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:29:29.360263       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:29:29.365493       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:29:29.367142       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:29:29.395662       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:29:29.398049       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:29:30.035497       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 10:29:30.049271       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 10:29:30.049613       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:29:30.799272       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:29:30.849000       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:29:30.946058       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 10:29:30.954338       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1227 10:29:30.955706       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:29:30.961132       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:29:31.144215       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:29:31.863489       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:29:31.884955       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 10:29:31.899165       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 10:29:36.648690       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:29:36.655799       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:29:36.746956       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:29:37.171217       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1227 10:30:05.628638       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35378: use of closed network connection
	
	
	==> kube-controller-manager [9600ec62faa94bed1d4dc3ed80437a8b8fa48c17930b6be79eebcd85a8c52482] <==
	I1227 10:29:35.958025       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.958066       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.958099       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.958321       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.958662       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.958701       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.959794       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.959893       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.968239       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.969536       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.970386       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.970492       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.983056       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.983151       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.983163       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.983415       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.983433       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:35.983448       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:36.037322       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-241090" podCIDRs=["10.244.0.0/24"]
	I1227 10:29:36.067333       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:29:36.164214       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:36.164242       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:29:36.164248       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:29:36.168276       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:55.961172       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [2a309354c0ba7ed844e7b7961bfb30459be95d7b60c34d49e63f4739aa19214b] <==
	I1227 10:29:37.876929       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:29:38.094570       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:29:38.224040       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:38.224079       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 10:29:38.224154       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:29:38.373382       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:29:38.373434       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:29:38.385941       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:29:38.386252       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:29:38.386269       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:29:38.387626       1 config.go:200] "Starting service config controller"
	I1227 10:29:38.387636       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:29:38.387652       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:29:38.387656       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:29:38.387666       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:29:38.387669       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:29:38.392575       1 config.go:309] "Starting node config controller"
	I1227 10:29:38.392592       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:29:38.392599       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:29:38.488026       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:29:38.488055       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:29:38.488087       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
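The kube-proxy warning above ("nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`") is advisory. If you wanted to act on it, the setting lives in the KubeProxyConfiguration that kube-proxy loads (in kubeadm-style clusters, the config.conf key of the kube-proxy ConfigMap in kube-system). A minimal excerpt is sketched below, assuming a kube-proxy version that accepts the special "primary" value the warning mentions; only the relevant field is shown.

    # Excerpt of a KubeProxyConfiguration; only the field relevant to the warning
    # above is shown. "primary" restricts NodePort listeners to the node's primary IPs.
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses: ["primary"]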
	
	
	==> kube-scheduler [9f01bfec022ab6b1bb1503ebc2f7a4a1b2dae784848e68e795d85bb6a61c4025] <==
	E1227 10:29:29.313147       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:29:29.313193       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:29:29.313240       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:29:29.313293       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:29:29.313480       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:29:29.313516       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:29:29.313567       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:29:29.313612       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:29:29.315579       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:29:29.315661       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:29:29.315698       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:29:29.315728       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:29:29.316071       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:29:30.137522       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:29:30.170397       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:29:30.173135       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:29:30.286756       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:29:30.310802       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:29:30.354240       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:29:30.379565       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:29:30.447659       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:29:30.467187       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:29:30.570234       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:29:30.765819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 10:29:33.393948       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:29:37 no-preload-241090 kubelet[1938]: I1227 10:29:37.528538    1938 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:29:37 no-preload-241090 kubelet[1938]: W1227 10:29:37.609758    1938 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/crio-fae2ac804cfe590365a5fc48ab6102f3fa71762022ad6560b4e9e715d20e2bd5 WatchSource:0}: Error finding container fae2ac804cfe590365a5fc48ab6102f3fa71762022ad6560b4e9e715d20e2bd5: Status 404 returned error can't find the container with id fae2ac804cfe590365a5fc48ab6102f3fa71762022ad6560b4e9e715d20e2bd5
	Dec 27 10:29:38 no-preload-241090 kubelet[1938]: E1227 10:29:38.297960    1938 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-241090" containerName="kube-scheduler"
	Dec 27 10:29:38 no-preload-241090 kubelet[1938]: I1227 10:29:38.344538    1938 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-8xv88" podStartSLOduration=1.344523671 podStartE2EDuration="1.344523671s" podCreationTimestamp="2025-12-27 10:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:29:37.982284218 +0000 UTC m=+6.294014832" watchObservedRunningTime="2025-12-27 10:29:38.344523671 +0000 UTC m=+6.656254285"
	Dec 27 10:29:41 no-preload-241090 kubelet[1938]: E1227 10:29:41.424949    1938 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-241090" containerName="kube-controller-manager"
	Dec 27 10:29:41 no-preload-241090 kubelet[1938]: E1227 10:29:41.701910    1938 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-241090" containerName="etcd"
	Dec 27 10:29:42 no-preload-241090 kubelet[1938]: E1227 10:29:42.653161    1938 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-241090" containerName="kube-apiserver"
	Dec 27 10:29:42 no-preload-241090 kubelet[1938]: I1227 10:29:42.677857    1938 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-jh987" podStartSLOduration=1.753316385 podStartE2EDuration="5.677841429s" podCreationTimestamp="2025-12-27 10:29:37 +0000 UTC" firstStartedPulling="2025-12-27 10:29:37.644298262 +0000 UTC m=+5.956028868" lastFinishedPulling="2025-12-27 10:29:41.568823306 +0000 UTC m=+9.880553912" observedRunningTime="2025-12-27 10:29:42.060762716 +0000 UTC m=+10.372493330" watchObservedRunningTime="2025-12-27 10:29:42.677841429 +0000 UTC m=+10.989572035"
	Dec 27 10:29:43 no-preload-241090 kubelet[1938]: E1227 10:29:43.035673    1938 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-241090" containerName="kube-apiserver"
	Dec 27 10:29:48 no-preload-241090 kubelet[1938]: E1227 10:29:48.306831    1938 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-241090" containerName="kube-scheduler"
	Dec 27 10:29:51 no-preload-241090 kubelet[1938]: E1227 10:29:51.439645    1938 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-241090" containerName="kube-controller-manager"
	Dec 27 10:29:51 no-preload-241090 kubelet[1938]: E1227 10:29:51.703047    1938 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-241090" containerName="etcd"
	Dec 27 10:29:52 no-preload-241090 kubelet[1938]: I1227 10:29:52.301500    1938 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 27 10:29:52 no-preload-241090 kubelet[1938]: I1227 10:29:52.422441    1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4-config-volume\") pod \"coredns-7d764666f9-5p545\" (UID: \"0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4\") " pod="kube-system/coredns-7d764666f9-5p545"
	Dec 27 10:29:52 no-preload-241090 kubelet[1938]: I1227 10:29:52.422778    1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvlq\" (UniqueName: \"kubernetes.io/projected/4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf-kube-api-access-2nvlq\") pod \"storage-provisioner\" (UID: \"4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf\") " pod="kube-system/storage-provisioner"
	Dec 27 10:29:52 no-preload-241090 kubelet[1938]: I1227 10:29:52.422976    1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxk2c\" (UniqueName: \"kubernetes.io/projected/0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4-kube-api-access-qxk2c\") pod \"coredns-7d764666f9-5p545\" (UID: \"0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4\") " pod="kube-system/coredns-7d764666f9-5p545"
	Dec 27 10:29:52 no-preload-241090 kubelet[1938]: I1227 10:29:52.423149    1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf-tmp\") pod \"storage-provisioner\" (UID: \"4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf\") " pod="kube-system/storage-provisioner"
	Dec 27 10:29:52 no-preload-241090 kubelet[1938]: W1227 10:29:52.732263    1938 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/crio-c64cfcc256f1e49c27713191dd554251d68960d6fb43ce1e34b71f26570f78fa WatchSource:0}: Error finding container c64cfcc256f1e49c27713191dd554251d68960d6fb43ce1e34b71f26570f78fa: Status 404 returned error can't find the container with id c64cfcc256f1e49c27713191dd554251d68960d6fb43ce1e34b71f26570f78fa
	Dec 27 10:29:53 no-preload-241090 kubelet[1938]: E1227 10:29:53.067359    1938 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5p545" containerName="coredns"
	Dec 27 10:29:53 no-preload-241090 kubelet[1938]: I1227 10:29:53.133265    1938 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-5p545" podStartSLOduration=16.133243694 podStartE2EDuration="16.133243694s" podCreationTimestamp="2025-12-27 10:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:29:53.094359131 +0000 UTC m=+21.406089745" watchObservedRunningTime="2025-12-27 10:29:53.133243694 +0000 UTC m=+21.444974299"
	Dec 27 10:29:53 no-preload-241090 kubelet[1938]: I1227 10:29:53.184047    1938 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.184029923 podStartE2EDuration="15.184029923s" podCreationTimestamp="2025-12-27 10:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:29:53.135185621 +0000 UTC m=+21.446916227" watchObservedRunningTime="2025-12-27 10:29:53.184029923 +0000 UTC m=+21.495760546"
	Dec 27 10:29:54 no-preload-241090 kubelet[1938]: E1227 10:29:54.072643    1938 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5p545" containerName="coredns"
	Dec 27 10:29:55 no-preload-241090 kubelet[1938]: E1227 10:29:55.075049    1938 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5p545" containerName="coredns"
	Dec 27 10:29:55 no-preload-241090 kubelet[1938]: I1227 10:29:55.545029    1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88hs\" (UniqueName: \"kubernetes.io/projected/67c83974-917e-46ed-b633-b33ab87382c0-kube-api-access-q88hs\") pod \"busybox\" (UID: \"67c83974-917e-46ed-b633-b33ab87382c0\") " pod="default/busybox"
	Dec 27 10:29:55 no-preload-241090 kubelet[1938]: W1227 10:29:55.833258    1938 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/crio-46ad4546ea5fa6ba7d141c72d28281b1159bfb647889c1d5084f42250d4693aa WatchSource:0}: Error finding container 46ad4546ea5fa6ba7d141c72d28281b1159bfb647889c1d5084f42250d4693aa: Status 404 returned error can't find the container with id 46ad4546ea5fa6ba7d141c72d28281b1159bfb647889c1d5084f42250d4693aa
	
	
	==> storage-provisioner [cd5e24c721c25cf5be40cc06eb57b4d96e144e784afd395bcbb29fe97fad50b7] <==
	I1227 10:29:52.773422       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:29:52.795460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:29:52.795510       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:29:52.800782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:52.813724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:29:52.814040       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:29:52.820544       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-241090_5d0a81aa-76d2-40ea-9287-4557e659bf1b!
	I1227 10:29:52.827694       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b161aa6-6257-4755-8180-933059c7757e", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-241090_5d0a81aa-76d2-40ea-9287-4557e659bf1b became leader
	W1227 10:29:52.848834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:52.858821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:29:52.921958       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-241090_5d0a81aa-76d2-40ea-9287-4557e659bf1b!
	W1227 10:29:54.862652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:54.867757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:56.870956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:56.876186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:58.879460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:29:58.885456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:00.889232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:00.901363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:02.904929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:02.911900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:04.915559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:04.920365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:06.924987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:06.932996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
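The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above coincide with its leader election, which records the lease in the kube-system/k8s.io-minikube-hostpath Endpoints object (see the LeaderElection event earlier in the same log). A hedged inspection sketch, not part of the test harness, assuming kubectl access to the no-preload-241090 context used by this test:

    # Hypothetical inspection command: view the leader-election record that the
    # deprecated-Endpoints warnings refer to.
    kubectl --context no-preload-241090 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
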
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241090 -n no-preload-241090
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-241090 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.63s)
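For local triage, a failed subtest like the one above can normally be re-run in isolation with Go's subtest selector. The following is a hypothetical reproduction sketch, assuming a minikube source checkout with the integration tests under ./test/integration and the out/minikube-linux-arm64 binary already built; the harness may expect additional flags (driver, container runtime) that are not shown here:

    # Hypothetical reproduction command (not taken from this report):
    go test ./test/integration \
      -run 'TestStartStop/group/no-preload/serial/EnableAddonWhileActive' \
      -v -timeout 30m
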

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (8.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-367691 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-367691 --alsologtostderr -v=1: exit status 80 (2.111735286s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-367691 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:30:35.643504  514191 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:30:35.643640  514191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:35.643645  514191 out.go:374] Setting ErrFile to fd 2...
	I1227 10:30:35.643651  514191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:35.643909  514191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:30:35.644253  514191 out.go:368] Setting JSON to false
	I1227 10:30:35.644276  514191 mustload.go:66] Loading cluster: embed-certs-367691
	I1227 10:30:35.644750  514191 config.go:182] Loaded profile config "embed-certs-367691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:35.645381  514191 cli_runner.go:164] Run: docker container inspect embed-certs-367691 --format={{.State.Status}}
	I1227 10:30:35.663289  514191 host.go:66] Checking if "embed-certs-367691" exists ...
	I1227 10:30:35.663613  514191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:30:35.729132  514191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-27 10:30:35.719383626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:30:35.729785  514191 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-367691 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(boo
l=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:30:35.733745  514191 out.go:179] * Pausing node embed-certs-367691 ... 
	I1227 10:30:35.737525  514191 host.go:66] Checking if "embed-certs-367691" exists ...
	I1227 10:30:35.737888  514191 ssh_runner.go:195] Run: systemctl --version
	I1227 10:30:35.737935  514191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-367691
	I1227 10:30:35.757780  514191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/embed-certs-367691/id_rsa Username:docker}
	I1227 10:30:35.860646  514191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:30:35.885823  514191 pause.go:52] kubelet running: true
	I1227 10:30:35.885897  514191 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:30:36.172798  514191 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:30:36.172880  514191 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:30:36.295847  514191 cri.go:96] found id: "ea8f6e14e817184fe51d0f5957f0409363ca444db8803a84523e0966ea183e7f"
	I1227 10:30:36.295927  514191 cri.go:96] found id: "7da2abbec178f9d68b05cbeba7d3f44de1cc48a998eac2bb77b106a98e4f6efb"
	I1227 10:30:36.295952  514191 cri.go:96] found id: "ee29ffd6fc0fea1a7798eaff2aac02990f393036d328855d43f123ce95af833f"
	I1227 10:30:36.295994  514191 cri.go:96] found id: "40795bfae7ba1aa8d709cee7fd131cac1b3e5cf4104406b7db981778a9131eb0"
	I1227 10:30:36.296014  514191 cri.go:96] found id: "8cd86734bcd509d8f341a879f4a8b5dd15f4639c8986a100aec8b8c61e2c100f"
	I1227 10:30:36.296049  514191 cri.go:96] found id: "7c5d69297d8771e299ca8b09a7cae96c2e7c5f87879fd1c567112742214e35f3"
	I1227 10:30:36.296087  514191 cri.go:96] found id: "56a9c2fee9e20cf978b01d8726c038b4c28e466158c9a28f5f3fdc75e851a27d"
	I1227 10:30:36.296108  514191 cri.go:96] found id: "a108df6f898b109467fab72294c0412641c1e5d2b2ea82f9edf2b1b962883dcf"
	I1227 10:30:36.296129  514191 cri.go:96] found id: "8c4fb8d9010ff30eec94b0fbcdd2a5948b473223b17ca2f2a4b0ce18bedff071"
	I1227 10:30:36.296168  514191 cri.go:96] found id: "d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906"
	I1227 10:30:36.296193  514191 cri.go:96] found id: "f2f8c2294cb00b78dfda28e1049259ceffb42f141d0b7661ada337ade587fa06"
	I1227 10:30:36.296214  514191 cri.go:96] found id: ""
	I1227 10:30:36.296295  514191 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:30:36.310947  514191 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:30:36Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:30:36.511264  514191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:30:36.528228  514191 pause.go:52] kubelet running: false
	I1227 10:30:36.528305  514191 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:30:36.752956  514191 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:30:36.753044  514191 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:30:36.823426  514191 cri.go:96] found id: "ea8f6e14e817184fe51d0f5957f0409363ca444db8803a84523e0966ea183e7f"
	I1227 10:30:36.823501  514191 cri.go:96] found id: "7da2abbec178f9d68b05cbeba7d3f44de1cc48a998eac2bb77b106a98e4f6efb"
	I1227 10:30:36.823522  514191 cri.go:96] found id: "ee29ffd6fc0fea1a7798eaff2aac02990f393036d328855d43f123ce95af833f"
	I1227 10:30:36.823545  514191 cri.go:96] found id: "40795bfae7ba1aa8d709cee7fd131cac1b3e5cf4104406b7db981778a9131eb0"
	I1227 10:30:36.823582  514191 cri.go:96] found id: "8cd86734bcd509d8f341a879f4a8b5dd15f4639c8986a100aec8b8c61e2c100f"
	I1227 10:30:36.823605  514191 cri.go:96] found id: "7c5d69297d8771e299ca8b09a7cae96c2e7c5f87879fd1c567112742214e35f3"
	I1227 10:30:36.823625  514191 cri.go:96] found id: "56a9c2fee9e20cf978b01d8726c038b4c28e466158c9a28f5f3fdc75e851a27d"
	I1227 10:30:36.823663  514191 cri.go:96] found id: "a108df6f898b109467fab72294c0412641c1e5d2b2ea82f9edf2b1b962883dcf"
	I1227 10:30:36.823693  514191 cri.go:96] found id: "8c4fb8d9010ff30eec94b0fbcdd2a5948b473223b17ca2f2a4b0ce18bedff071"
	I1227 10:30:36.823715  514191 cri.go:96] found id: "d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906"
	I1227 10:30:36.823749  514191 cri.go:96] found id: "f2f8c2294cb00b78dfda28e1049259ceffb42f141d0b7661ada337ade587fa06"
	I1227 10:30:36.823773  514191 cri.go:96] found id: ""
	I1227 10:30:36.823874  514191 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:30:37.255794  514191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:30:37.280044  514191 pause.go:52] kubelet running: false
	I1227 10:30:37.280126  514191 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:30:37.542321  514191 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:30:37.542449  514191 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:30:37.662531  514191 cri.go:96] found id: "ea8f6e14e817184fe51d0f5957f0409363ca444db8803a84523e0966ea183e7f"
	I1227 10:30:37.662566  514191 cri.go:96] found id: "7da2abbec178f9d68b05cbeba7d3f44de1cc48a998eac2bb77b106a98e4f6efb"
	I1227 10:30:37.662571  514191 cri.go:96] found id: "ee29ffd6fc0fea1a7798eaff2aac02990f393036d328855d43f123ce95af833f"
	I1227 10:30:37.662575  514191 cri.go:96] found id: "40795bfae7ba1aa8d709cee7fd131cac1b3e5cf4104406b7db981778a9131eb0"
	I1227 10:30:37.662578  514191 cri.go:96] found id: "8cd86734bcd509d8f341a879f4a8b5dd15f4639c8986a100aec8b8c61e2c100f"
	I1227 10:30:37.662581  514191 cri.go:96] found id: "7c5d69297d8771e299ca8b09a7cae96c2e7c5f87879fd1c567112742214e35f3"
	I1227 10:30:37.662584  514191 cri.go:96] found id: "56a9c2fee9e20cf978b01d8726c038b4c28e466158c9a28f5f3fdc75e851a27d"
	I1227 10:30:37.662587  514191 cri.go:96] found id: "a108df6f898b109467fab72294c0412641c1e5d2b2ea82f9edf2b1b962883dcf"
	I1227 10:30:37.662590  514191 cri.go:96] found id: "8c4fb8d9010ff30eec94b0fbcdd2a5948b473223b17ca2f2a4b0ce18bedff071"
	I1227 10:30:37.662596  514191 cri.go:96] found id: "d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906"
	I1227 10:30:37.662599  514191 cri.go:96] found id: "f2f8c2294cb00b78dfda28e1049259ceffb42f141d0b7661ada337ade587fa06"
	I1227 10:30:37.662602  514191 cri.go:96] found id: ""
	I1227 10:30:37.662666  514191 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:30:37.689702  514191 out.go:203] 
	W1227 10:30:37.692755  514191 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:30:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:30:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:30:37.692784  514191 out.go:285] * 
	* 
	W1227 10:30:37.695596  514191 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:30:37.699508  514191 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-367691 --alsologtostderr -v=1 failed: exit status 80
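The exit status 80 above reduces to a single underlying error: "sudo runc list -f json" exits 1 inside the node because /run/runc does not exist, so minikube cannot enumerate running containers to pause. A hedged diagnostic sketch, not part of the harness, assuming the docker driver and the embed-certs-367691 node container described in the post-mortem below:

    # Hypothetical diagnostics run from the host against the node container.
    docker exec embed-certs-367691 ls -ld /run/runc          # the path runc failed to open
    docker exec embed-certs-367691 sudo runc list -f json    # the exact command that failed
    docker exec embed-certs-367691 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
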
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-367691
helpers_test.go:244: (dbg) docker inspect embed-certs-367691:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857",
	        "Created": "2025-12-27T10:28:25.951096938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 508988,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:29:34.236338146Z",
	            "FinishedAt": "2025-12-27T10:29:33.279104768Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/hostname",
	        "HostsPath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/hosts",
	        "LogPath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857-json.log",
	        "Name": "/embed-certs-367691",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-367691:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-367691",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857",
	                "LowerDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-367691",
	                "Source": "/var/lib/docker/volumes/embed-certs-367691/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-367691",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-367691",
	                "name.minikube.sigs.k8s.io": "embed-certs-367691",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9153ae2bf1ae226b1a6dc45857fdc150d1d90d17b4fefc387f4edfd98dddeb66",
	            "SandboxKey": "/var/run/docker/netns/9153ae2bf1ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-367691": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:8c:b3:5a:d8:6f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d03ce9bfd46e85bbc9765f774251ba284121a67953c86059ad99286cf88212c",
	                    "EndpointID": "673ef0b4beaf14d5aea1880f9d5f46f18ce3288841ad8417515c29249fb12005",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-367691",
	                        "d75458839d4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-367691 -n embed-certs-367691
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-367691 -n embed-certs-367691: exit status 2 (493.122635ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-367691 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-367691 logs -n 25: (1.804758838s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-784377 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-784377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ image   │ default-k8s-diff-port-784377 image list --format=json                                                                                                                    │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ pause   │ -p default-k8s-diff-port-784377 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                          │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                          │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ ssh     │ force-systemd-flag-915850 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p force-systemd-flag-915850                                                                                                                                             │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p disable-driver-mounts-913868                                                                                                                                          │ disable-driver-mounts-913868 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │                     │
	│ stop    │ -p embed-certs-367691 --alsologtostderr -v=3                                                                                                                             │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable dashboard -p embed-certs-367691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable metrics-server -p no-preload-241090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ stop    │ -p no-preload-241090 --alsologtostderr -v=3                                                                                                                              │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable dashboard -p no-preload-241090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ image   │ embed-certs-367691 image list --format=json                                                                                                                              │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ pause   │ -p embed-certs-367691 --alsologtostderr -v=1                                                                                                                             │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:30:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:30:20.538352  512231 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:30:20.538617  512231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:20.538649  512231 out.go:374] Setting ErrFile to fd 2...
	I1227 10:30:20.538708  512231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:20.539026  512231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:30:20.539526  512231 out.go:368] Setting JSON to false
	I1227 10:30:20.540810  512231 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7974,"bootTime":1766823447,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:30:20.540917  512231 start.go:143] virtualization:  
	I1227 10:30:20.543921  512231 out.go:179] * [no-preload-241090] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:30:20.548091  512231 notify.go:221] Checking for updates...
	I1227 10:30:20.548113  512231 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:30:20.552128  512231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:30:20.555152  512231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:30:20.558125  512231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:30:20.561672  512231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:30:20.564736  512231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:30:20.568225  512231 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:20.568825  512231 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:30:20.590086  512231 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:30:20.590280  512231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:30:20.651335  512231 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:30:20.641524756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:30:20.651449  512231 docker.go:319] overlay module found
	I1227 10:30:20.655120  512231 out.go:179] * Using the docker driver based on existing profile
	I1227 10:30:20.658033  512231 start.go:309] selected driver: docker
	I1227 10:30:20.658061  512231 start.go:928] validating driver "docker" against &{Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:30:20.658182  512231 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:30:20.658941  512231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:30:20.716221  512231 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:30:20.705971211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
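The docker system info call above (with the "{{json .}}" template) is how minikube snapshots the daemon before starting: CPU count, total memory, storage driver and cgroup driver all come from that one JSON dump. A minimal sketch of pulling just those fields by hand, against any reachable Docker daemon, using the same field names visible in the log line above:

  $ docker system info --format 'cpus={{.NCPU}} mem={{.MemTotal}} storage={{.Driver}} cgroup={{.CgroupDriver}}'
  # cpus=2 mem=8214831104 storage=overlay2 cgroup=cgroupfs   (values from this run)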
	I1227 10:30:20.716563  512231 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:30:20.716601  512231 cni.go:84] Creating CNI manager for ""
	I1227 10:30:20.716665  512231 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:30:20.716707  512231 start.go:353] cluster config:
	{Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:30:20.720072  512231 out.go:179] * Starting "no-preload-241090" primary control-plane node in "no-preload-241090" cluster
	I1227 10:30:20.723010  512231 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:30:20.726057  512231 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:30:20.728885  512231 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:30:20.728972  512231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:30:20.729050  512231 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/config.json ...
	I1227 10:30:20.729336  512231 cache.go:107] acquiring lock: {Name:mk20c624f37c3909dde5a8d589ecabaa6d57d038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.729473  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 10:30:20.729501  512231 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 172.137µs
	I1227 10:30:20.729533  512231 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 10:30:20.729564  512231 cache.go:107] acquiring lock: {Name:mkbb24fa4343d0a35603cb19aa6239dff4f2f276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.729621  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 10:30:20.729649  512231 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 86.91µs
	I1227 10:30:20.729671  512231 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 10:30:20.729697  512231 cache.go:107] acquiring lock: {Name:mk4c45856071606c8af5d7273166a2f1bb9ddc55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.729747  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 10:30:20.729775  512231 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 78.991µs
	I1227 10:30:20.729796  512231 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 10:30:20.729836  512231 cache.go:107] acquiring lock: {Name:mkf9b1edb58a976305f282f57eeb11e80f0b7bb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.729929  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 10:30:20.729953  512231 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 130.84µs
	I1227 10:30:20.729996  512231 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 10:30:20.730025  512231 cache.go:107] acquiring lock: {Name:mkf98c62b88cf915fe929ba90cd6ed029cecc870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.730079  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 10:30:20.730112  512231 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 84.325µs
	I1227 10:30:20.730134  512231 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 10:30:20.730159  512231 cache.go:107] acquiring lock: {Name:mka12fccf8e2bbc0ccc499614d0ccb8a211e1cb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.730209  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1227 10:30:20.730229  512231 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 71.049µs
	I1227 10:30:20.730253  512231 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 10:30:20.730281  512231 cache.go:107] acquiring lock: {Name:mk2a8f120e089d53474aed758c34eb39d391985d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.730329  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 10:30:20.730349  512231 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 72.912µs
	I1227 10:30:20.730369  512231 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 10:30:20.730400  512231 cache.go:107] acquiring lock: {Name:mk262c37486fa86829e275f8385c93b0718c0ef2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.730456  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 10:30:20.730476  512231 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 78.573µs
	I1227 10:30:20.730501  512231 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 10:30:20.730524  512231 cache.go:87] Successfully saved all images to host disk.
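Each cache.go:115 line above is a hit against minikube's on-disk image cache: because this is the no-preload profile, every control-plane image has to exist as a pre-saved tarball before the node is started, and here all eight lookups complete in microseconds. The cached set can be listed directly from the paths shown in the log (illustrative listing; exact contents depend on what has previously been cached):

  $ ls /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/
  # coredns  etcd_3.6.6-0  kube-apiserver_v1.35.0  kube-controller-manager_v1.35.0  kube-proxy_v1.35.0  kube-scheduler_v1.35.0  pause_3.10.1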
	I1227 10:30:20.751280  512231 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:30:20.751306  512231 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:30:20.751330  512231 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:30:20.751364  512231 start.go:360] acquireMachinesLock for no-preload-241090: {Name:mk51902d6c01d44d9c13da3d668b0d82e1b30c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.751434  512231 start.go:364] duration metric: took 48.288µs to acquireMachinesLock for "no-preload-241090"
	I1227 10:30:20.751458  512231 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:30:20.751469  512231 fix.go:54] fixHost starting: 
	I1227 10:30:20.751760  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:20.768965  512231 fix.go:112] recreateIfNeeded on no-preload-241090: state=Stopped err=<nil>
	W1227 10:30:20.768996  512231 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 10:30:19.779235  508852 pod_ready.go:104] pod "coredns-7d764666f9-t88nq" is not "Ready", error: <nil>
	I1227 10:30:21.280352  508852 pod_ready.go:94] pod "coredns-7d764666f9-t88nq" is "Ready"
	I1227 10:30:21.280378  508852 pod_ready.go:86] duration metric: took 32.507176806s for pod "coredns-7d764666f9-t88nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.285031  508852 pod_ready.go:83] waiting for pod "etcd-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.291920  508852 pod_ready.go:94] pod "etcd-embed-certs-367691" is "Ready"
	I1227 10:30:21.292024  508852 pod_ready.go:86] duration metric: took 6.9694ms for pod "etcd-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.294509  508852 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.299446  508852 pod_ready.go:94] pod "kube-apiserver-embed-certs-367691" is "Ready"
	I1227 10:30:21.299515  508852 pod_ready.go:86] duration metric: took 4.981228ms for pod "kube-apiserver-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.302554  508852 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.479699  508852 pod_ready.go:94] pod "kube-controller-manager-embed-certs-367691" is "Ready"
	I1227 10:30:21.479787  508852 pod_ready.go:86] duration metric: took 177.207952ms for pod "kube-controller-manager-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.677135  508852 pod_ready.go:83] waiting for pod "kube-proxy-rpjg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:22.077657  508852 pod_ready.go:94] pod "kube-proxy-rpjg8" is "Ready"
	I1227 10:30:22.077687  508852 pod_ready.go:86] duration metric: took 400.481889ms for pod "kube-proxy-rpjg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:22.276738  508852 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:22.677865  508852 pod_ready.go:94] pod "kube-scheduler-embed-certs-367691" is "Ready"
	I1227 10:30:22.677898  508852 pod_ready.go:86] duration metric: took 401.131611ms for pod "kube-scheduler-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:22.677911  508852 pod_ready.go:40] duration metric: took 33.968861104s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:30:22.765162  508852 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:30:22.768504  508852 out.go:203] 
	W1227 10:30:22.771321  508852 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:30:22.773991  508852 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:30:22.776865  508852 out.go:179] * Done! kubectl is now configured to use "embed-certs-367691" cluster and "default" namespace by default
	I1227 10:30:20.772330  512231 out.go:252] * Restarting existing docker container for "no-preload-241090" ...
	I1227 10:30:20.772428  512231 cli_runner.go:164] Run: docker start no-preload-241090
	I1227 10:30:21.042721  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:21.067474  512231 kic.go:430] container "no-preload-241090" state is running.
	I1227 10:30:21.067854  512231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241090
	I1227 10:30:21.094104  512231 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/config.json ...
	I1227 10:30:21.094338  512231 machine.go:94] provisionDockerMachine start ...
	I1227 10:30:21.094395  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:21.120355  512231 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:21.120745  512231 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 10:30:21.120755  512231 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:30:21.124005  512231 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56298->127.0.0.1:33443: read: connection reset by peer
	I1227 10:30:24.267855  512231 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-241090
	
	I1227 10:30:24.267880  512231 ubuntu.go:182] provisioning hostname "no-preload-241090"
	I1227 10:30:24.267947  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:24.286053  512231 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:24.286386  512231 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 10:30:24.286404  512231 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-241090 && echo "no-preload-241090" | sudo tee /etc/hostname
	I1227 10:30:24.433517  512231 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-241090
	
	I1227 10:30:24.433624  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:24.452304  512231 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:24.452632  512231 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 10:30:24.452655  512231 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241090' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241090/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241090' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:30:24.596371  512231 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:30:24.596397  512231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:30:24.596428  512231 ubuntu.go:190] setting up certificates
	I1227 10:30:24.596446  512231 provision.go:84] configureAuth start
	I1227 10:30:24.596507  512231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241090
	I1227 10:30:24.614716  512231 provision.go:143] copyHostCerts
	I1227 10:30:24.614786  512231 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:30:24.614811  512231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:30:24.614893  512231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:30:24.615015  512231 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:30:24.615026  512231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:30:24.615060  512231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:30:24.615126  512231 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:30:24.615135  512231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:30:24.615159  512231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:30:24.615221  512231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.no-preload-241090 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-241090]
	I1227 10:30:25.121062  512231 provision.go:177] copyRemoteCerts
	I1227 10:30:25.121143  512231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:30:25.121203  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:25.142688  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:25.246221  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:30:25.267893  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:30:25.287599  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:30:25.306338  512231 provision.go:87] duration metric: took 709.866125ms to configureAuth
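configureAuth above regenerates the Docker machine server certificate with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-241090) and copies the CA, server cert and server key into /etc/docker on the node. With OpenSSL 1.1.1 or newer, the SANs on the generated cert can be inspected from the host; the path is the one from the log, the output shape is illustrative:

  $ openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem
  # X509v3 Subject Alternative Name:
  #     DNS:localhost, DNS:minikube, DNS:no-preload-241090, IP Address:127.0.0.1, IP Address:192.168.85.2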
	I1227 10:30:25.306369  512231 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:30:25.306611  512231 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:25.306731  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:25.324991  512231 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:25.325310  512231 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 10:30:25.325331  512231 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:30:25.706290  512231 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:30:25.706361  512231 machine.go:97] duration metric: took 4.612012852s to provisionDockerMachine
	I1227 10:30:25.706379  512231 start.go:293] postStartSetup for "no-preload-241090" (driver="docker")
	I1227 10:30:25.706391  512231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:30:25.706464  512231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:30:25.706507  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:25.729016  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:25.832429  512231 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:30:25.836180  512231 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:30:25.836210  512231 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:30:25.836241  512231 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:30:25.836320  512231 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:30:25.836440  512231 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:30:25.836551  512231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:30:25.844441  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:30:25.862491  512231 start.go:296] duration metric: took 156.081264ms for postStartSetup
	I1227 10:30:25.862600  512231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:30:25.862661  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:25.879617  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:25.977232  512231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:30:25.982365  512231 fix.go:56] duration metric: took 5.230888252s for fixHost
	I1227 10:30:25.982396  512231 start.go:83] releasing machines lock for "no-preload-241090", held for 5.230949857s
	I1227 10:30:25.982476  512231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241090
	I1227 10:30:25.999702  512231 ssh_runner.go:195] Run: cat /version.json
	I1227 10:30:25.999763  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:26.000061  512231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:30:26.000139  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:26.026270  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:26.026860  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:26.212864  512231 ssh_runner.go:195] Run: systemctl --version
	I1227 10:30:26.221348  512231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:30:26.260440  512231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:30:26.265876  512231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:30:26.265962  512231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:30:26.277955  512231 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:30:26.277988  512231 start.go:496] detecting cgroup driver to use...
	I1227 10:30:26.278041  512231 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:30:26.278110  512231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:30:26.293678  512231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:30:26.307120  512231 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:30:26.307208  512231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:30:26.322935  512231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:30:26.337140  512231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:30:26.452292  512231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:30:26.578890  512231 docker.go:234] disabling docker service ...
	I1227 10:30:26.579008  512231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:30:26.595076  512231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:30:26.609182  512231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:30:26.739046  512231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:30:26.863172  512231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:30:26.878046  512231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:30:26.893223  512231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:30:26.893304  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.902109  512231 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:30:26.902180  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.911470  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.921851  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.933350  512231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:30:26.941800  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.950834  512231 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.959507  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
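The run of sed commands above is minikube's in-place edit of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.10.1, force the cgroupfs cgroup manager, put conmon in the pod cgroup, and add a default_sysctls entry that opens unprivileged low ports. A quick way to confirm the resulting drop-in on the node (key names taken from the commands above; exact file layout may differ):

  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10.1"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",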
	I1227 10:30:26.968557  512231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:30:26.976567  512231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:30:26.984304  512231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:30:27.099199  512231 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1227 10:30:27.278616  512231 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:30:27.278784  512231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:30:27.283009  512231 start.go:574] Will wait 60s for crictl version
	I1227 10:30:27.283136  512231 ssh_runner.go:195] Run: which crictl
	I1227 10:30:27.287616  512231 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:30:27.318244  512231 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:30:27.318419  512231 ssh_runner.go:195] Run: crio --version
	I1227 10:30:27.350422  512231 ssh_runner.go:195] Run: crio --version
	I1227 10:30:27.384507  512231 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:30:27.387309  512231 cli_runner.go:164] Run: docker network inspect no-preload-241090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
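The network inspection above packs the subnet, gateway, MTU and every container IP into one Go template so minikube can get everything in a single docker call. When only the subnet and gateway matter, a much shorter template against the same network returns the core information (expected output inferred from the 192.168.85.x addresses elsewhere in this log):

  $ docker network inspect no-preload-241090 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
  # 192.168.85.0/24 192.168.85.1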
	I1227 10:30:27.404596  512231 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:30:27.408625  512231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:30:27.418706  512231 kubeadm.go:884] updating cluster {Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:30:27.418833  512231 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:30:27.418876  512231 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:30:27.455960  512231 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:30:27.456034  512231 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:30:27.456049  512231 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 10:30:27.456146  512231 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-241090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
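The [Service] fragment above becomes a systemd drop-in on the node (the log copies it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down); the empty ExecStart= line clears any packaged command line before the minikube-specific one is applied. Two standard systemctl commands show the merged unit and the effective ExecStart on the node, assuming a shell inside the container:

  $ sudo systemctl cat kubelet
  $ sudo systemctl show kubelet -p ExecStart --no-pager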
	I1227 10:30:27.456238  512231 ssh_runner.go:195] Run: crio config
	I1227 10:30:27.529993  512231 cni.go:84] Creating CNI manager for ""
	I1227 10:30:27.530069  512231 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:30:27.530104  512231 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:30:27.530161  512231 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241090 NodeName:no-preload-241090 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:30:27.530360  512231 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-241090"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:30:27.530478  512231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:30:27.538859  512231 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:30:27.538947  512231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:30:27.546624  512231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 10:30:27.567107  512231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:30:27.580261  512231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
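The kubeadm config generated above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) and has just been copied to /var/tmp/minikube/kubeadm.yaml.new. The test does not do this itself, but as an illustration, recent kubeadm releases can validate such a file using the binary path the log already uses:

  $ sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new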
	I1227 10:30:27.592953  512231 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:30:27.596686  512231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:30:27.606875  512231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:30:27.724222  512231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:30:27.744775  512231 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090 for IP: 192.168.85.2
	I1227 10:30:27.744839  512231 certs.go:195] generating shared ca certs ...
	I1227 10:30:27.744871  512231 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:27.745049  512231 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:30:27.745119  512231 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:30:27.745142  512231 certs.go:257] generating profile certs ...
	I1227 10:30:27.745277  512231 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/client.key
	I1227 10:30:27.745398  512231 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/apiserver.key.a9feda9d
	I1227 10:30:27.745468  512231 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/proxy-client.key
	I1227 10:30:27.745615  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:30:27.745691  512231 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:30:27.745728  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:30:27.745782  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:30:27.745842  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:30:27.745894  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:30:27.745983  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:30:27.746617  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:30:27.771695  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:30:27.791775  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:30:27.811336  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:30:27.831040  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 10:30:27.850178  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:30:27.868794  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:30:27.895502  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:30:27.914299  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:30:27.941773  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:30:27.962611  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:30:27.986709  512231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:30:28.005697  512231 ssh_runner.go:195] Run: openssl version
	I1227 10:30:28.013175  512231 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:30:28.023009  512231 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:30:28.031569  512231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:30:28.036263  512231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:30:28.036384  512231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:30:28.079486  512231 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:30:28.089682  512231 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:30:28.099311  512231 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:30:28.107776  512231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:30:28.111676  512231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:30:28.111738  512231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:30:28.153057  512231 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:30:28.160905  512231 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:30:28.170003  512231 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:30:28.179216  512231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:30:28.183251  512231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:30:28.183330  512231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:30:28.225148  512231 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
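The three test -L checks above rely on OpenSSL's hashed-directory convention: each CA placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, so verification can locate it without scanning every file. The hash in each symlink name is exactly what the preceding openssl x509 -hash calls print; for the minikube CA, for example (symlink target shown is illustrative):

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # b5213941
  $ ls -l /etc/ssl/certs/b5213941.0
  # lrwxrwxrwx 1 root root ... /etc/ssl/certs/b5213941.0 -> minikubeCA.pem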
	I1227 10:30:28.233705  512231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:30:28.237991  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:30:28.282700  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:30:28.340982  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:30:28.397474  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:30:28.490293  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:30:28.567982  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 10:30:28.623176  512231 kubeadm.go:401] StartCluster: {Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:30:28.623266  512231 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:30:28.623338  512231 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:30:28.690520  512231 cri.go:96] found id: "0be2bd393e285cb49c8e5b5f66063ce6781e934558ad30c47aa3aec488565ab9"
	I1227 10:30:28.690547  512231 cri.go:96] found id: "5ef714a1055a6cf93a2f1f0f649e4d4fa6f789af9150c2755a1c2d09b53037b1"
	I1227 10:30:28.690553  512231 cri.go:96] found id: "4264015374f91b531af599acfc367aa072b442eccc1ffead423255914a0d9f09"
	I1227 10:30:28.690557  512231 cri.go:96] found id: "96e2bc84c864d4d7cc89f0f2517101b59c5cc5096c04209185554cf59b742f37"
	I1227 10:30:28.690561  512231 cri.go:96] found id: ""
	I1227 10:30:28.690613  512231 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:30:28.722579  512231 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:30:28Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:30:28.722659  512231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:30:28.737407  512231 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:30:28.737497  512231 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:30:28.737588  512231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:30:28.757588  512231 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:30:28.758521  512231 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-241090" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:30:28.759210  512231 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-241090" cluster setting kubeconfig missing "no-preload-241090" context setting]
	I1227 10:30:28.760186  512231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:28.762418  512231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:30:28.778475  512231 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 10:30:28.778575  512231 kubeadm.go:602] duration metric: took 41.046513ms to restartPrimaryControlPlane
	I1227 10:30:28.778611  512231 kubeadm.go:403] duration metric: took 155.443736ms to StartCluster
	I1227 10:30:28.778680  512231 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:28.778793  512231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:30:28.780588  512231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:28.781253  512231 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:30:28.781631  512231 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:28.781656  512231 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:30:28.781841  512231 addons.go:70] Setting storage-provisioner=true in profile "no-preload-241090"
	I1227 10:30:28.781859  512231 addons.go:239] Setting addon storage-provisioner=true in "no-preload-241090"
	W1227 10:30:28.781866  512231 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:30:28.781893  512231 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:30:28.782352  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:28.782779  512231 addons.go:70] Setting default-storageclass=true in profile "no-preload-241090"
	I1227 10:30:28.782842  512231 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-241090"
	I1227 10:30:28.782968  512231 addons.go:70] Setting dashboard=true in profile "no-preload-241090"
	I1227 10:30:28.783000  512231 addons.go:239] Setting addon dashboard=true in "no-preload-241090"
	W1227 10:30:28.783010  512231 addons.go:248] addon dashboard should already be in state true
	I1227 10:30:28.783032  512231 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:30:28.783329  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:28.783514  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:28.787200  512231 out.go:179] * Verifying Kubernetes components...
	I1227 10:30:28.791024  512231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:30:28.855796  512231 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:30:28.867877  512231 addons.go:239] Setting addon default-storageclass=true in "no-preload-241090"
	W1227 10:30:28.867900  512231 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:30:28.867929  512231 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:30:28.868377  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:28.868563  512231 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:30:28.868586  512231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:30:28.868629  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:28.884138  512231 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:30:28.887181  512231 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:30:28.890573  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:30:28.890603  512231 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:30:28.890694  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:28.911513  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:28.921438  512231 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:30:28.921470  512231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:30:28.921536  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:28.964989  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:28.964989  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:29.193255  512231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:30:29.233767  512231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:30:29.243280  512231 node_ready.go:35] waiting up to 6m0s for node "no-preload-241090" to be "Ready" ...
	I1227 10:30:29.250394  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:30:29.250576  512231 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:30:29.300583  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:30:29.300682  512231 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:30:29.365244  512231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:30:29.369819  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:30:29.369907  512231 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:30:29.465499  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:30:29.465533  512231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:30:29.550590  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:30:29.550614  512231 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:30:29.633168  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:30:29.633193  512231 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:30:29.662324  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:30:29.662430  512231 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:30:29.690834  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:30:29.690953  512231 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:30:29.719121  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:30:29.719209  512231 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:30:29.745996  512231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:30:32.307044  512231 node_ready.go:49] node "no-preload-241090" is "Ready"
	I1227 10:30:32.307078  512231 node_ready.go:38] duration metric: took 3.063619964s for node "no-preload-241090" to be "Ready" ...
	I1227 10:30:32.307093  512231 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:30:32.307159  512231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:30:34.069876  512231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.704516105s)
	I1227 10:30:34.070202  512231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.324104161s)
	I1227 10:30:34.070417  512231 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.763241432s)
	I1227 10:30:34.070461  512231 api_server.go:72] duration metric: took 5.289146285s to wait for apiserver process to appear ...
	I1227 10:30:34.070469  512231 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:30:34.070486  512231 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 10:30:34.071520  512231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.8377211s)
	I1227 10:30:34.073765  512231 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-241090 addons enable metrics-server
	
	I1227 10:30:34.089159  512231 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 10:30:34.091681  512231 api_server.go:141] control plane version: v1.35.0
	I1227 10:30:34.091722  512231 api_server.go:131] duration metric: took 21.246255ms to wait for apiserver health ...
	I1227 10:30:34.091736  512231 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:30:34.105973  512231 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 10:30:34.107696  512231 system_pods.go:59] 8 kube-system pods found
	I1227 10:30:34.107744  512231 system_pods.go:61] "coredns-7d764666f9-5p545" [0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:30:34.107753  512231 system_pods.go:61] "etcd-no-preload-241090" [835a968b-a507-4885-a74d-434ece70fa72] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:30:34.107789  512231 system_pods.go:61] "kindnet-jh987" [6cbce1aa-237d-42fa-bc32-dde8b72f3668] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:30:34.107805  512231 system_pods.go:61] "kube-apiserver-no-preload-241090" [e5c18f64-1c76-496a-8dd0-b5cbcfffefb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:30:34.107813  512231 system_pods.go:61] "kube-controller-manager-no-preload-241090" [12e95943-625c-4a69-aeff-d4364483de48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:30:34.107836  512231 system_pods.go:61] "kube-proxy-8xv88" [ffe92c3b-92ca-41f8-91a8-2c0983689068] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:30:34.107869  512231 system_pods.go:61] "kube-scheduler-no-preload-241090" [55ff2824-5114-426e-a833-df3be58eee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:30:34.107880  512231 system_pods.go:61] "storage-provisioner" [4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf] Running
	I1227 10:30:34.107887  512231 system_pods.go:74] duration metric: took 16.14569ms to wait for pod list to return data ...
	I1227 10:30:34.107902  512231 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:30:34.109021  512231 addons.go:530] duration metric: took 5.327366858s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 10:30:34.121053  512231 default_sa.go:45] found service account: "default"
	I1227 10:30:34.121079  512231 default_sa.go:55] duration metric: took 13.17039ms for default service account to be created ...
	I1227 10:30:34.121089  512231 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:30:34.130418  512231 system_pods.go:86] 8 kube-system pods found
	I1227 10:30:34.130454  512231 system_pods.go:89] "coredns-7d764666f9-5p545" [0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:30:34.130464  512231 system_pods.go:89] "etcd-no-preload-241090" [835a968b-a507-4885-a74d-434ece70fa72] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:30:34.130474  512231 system_pods.go:89] "kindnet-jh987" [6cbce1aa-237d-42fa-bc32-dde8b72f3668] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:30:34.130487  512231 system_pods.go:89] "kube-apiserver-no-preload-241090" [e5c18f64-1c76-496a-8dd0-b5cbcfffefb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:30:34.130494  512231 system_pods.go:89] "kube-controller-manager-no-preload-241090" [12e95943-625c-4a69-aeff-d4364483de48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:30:34.130502  512231 system_pods.go:89] "kube-proxy-8xv88" [ffe92c3b-92ca-41f8-91a8-2c0983689068] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:30:34.130513  512231 system_pods.go:89] "kube-scheduler-no-preload-241090" [55ff2824-5114-426e-a833-df3be58eee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:30:34.130518  512231 system_pods.go:89] "storage-provisioner" [4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf] Running
	I1227 10:30:34.130526  512231 system_pods.go:126] duration metric: took 9.431209ms to wait for k8s-apps to be running ...
	I1227 10:30:34.130533  512231 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:30:34.130589  512231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:30:34.148404  512231 system_svc.go:56] duration metric: took 17.860136ms WaitForService to wait for kubelet
	I1227 10:30:34.148431  512231 kubeadm.go:587] duration metric: took 5.36711424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:30:34.148456  512231 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:30:34.159277  512231 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:30:34.159308  512231 node_conditions.go:123] node cpu capacity is 2
	I1227 10:30:34.159322  512231 node_conditions.go:105] duration metric: took 10.860779ms to run NodePressure ...
	I1227 10:30:34.159336  512231 start.go:242] waiting for startup goroutines ...
	I1227 10:30:34.159343  512231 start.go:247] waiting for cluster config update ...
	I1227 10:30:34.159354  512231 start.go:256] writing updated cluster config ...
	I1227 10:30:34.159628  512231 ssh_runner.go:195] Run: rm -f paused
	I1227 10:30:34.164216  512231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:30:34.168369  512231 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5p545" in "kube-system" namespace to be "Ready" or be gone ...
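For reference, the readiness checks logged above (node "Ready", kube-system pods, default service account) can also be verified by hand; a minimal sketch, assuming the kubeconfig written by this run is in use and that the kubectl context carries the profile name, as the kubeconfig repair earlier in the log indicates:

	kubectl --context no-preload-241090 get nodes
	kubectl --context no-preload-241090 -n kube-system get pods -o wide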
	
	
	==> CRI-O <==
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.336778427Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.343014478Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.343181372Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.343285192Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.360401452Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.360577528Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.36065889Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.372331633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.372496081Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.372568927Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.380386336Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.380559056Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.482745838Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=eae0753a-76f8-44db-8303-00daa7397fb7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.485350476Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ffd455f0-2025-41c3-97e4-36b98daedf07 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.487898793Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt/dashboard-metrics-scraper" id=31fd7413-8ff7-44e9-bbcf-3a7356a35b4b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.488098491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.499506246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.500490314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.529843202Z" level=info msg="Created container d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt/dashboard-metrics-scraper" id=31fd7413-8ff7-44e9-bbcf-3a7356a35b4b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.531096261Z" level=info msg="Starting container: d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906" id=39ae6d3e-a8c0-436f-8ef3-17330159a43c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.536011462Z" level=info msg="Started container" PID=1743 containerID=d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt/dashboard-metrics-scraper id=39ae6d3e-a8c0-436f-8ef3-17330159a43c name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e905e033fb00c3da0705e22b29c7e3ab63606db1b08b5e9287ecfd0333a50e8
	Dec 27 10:30:33 embed-certs-367691 conmon[1741]: conmon d3e7508c9848f2bffdb6 <ninfo>: container 1743 exited with status 1
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.692350239Z" level=info msg="Removing container: 614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad" id=9561ba05-e0df-4702-a01d-128e1e80625a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.710609468Z" level=info msg="Error loading conmon cgroup of container 614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad: cgroup deleted" id=9561ba05-e0df-4702-a01d-128e1e80625a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.716526082Z" level=info msg="Removed container 614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt/dashboard-metrics-scraper" id=9561ba05-e0df-4702-a01d-128e1e80625a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d3e7508c9848f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   2e905e033fb00       dashboard-metrics-scraper-867fb5f87b-vp5vt   kubernetes-dashboard
	ea8f6e14e8171       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   6e72752f3bb39       storage-provisioner                          kube-system
	f2f8c2294cb00       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   57f14e7873b7c       kubernetes-dashboard-b84665fb8-27bs2         kubernetes-dashboard
	e7424ec6139f7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   a4a69001335f2       busybox                                      default
	7da2abbec178f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   6e72752f3bb39       storage-provisioner                          kube-system
	ee29ffd6fc0fe       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           51 seconds ago      Running             kube-proxy                  1                   6dbcf42befb8c       kube-proxy-rpjg8                             kube-system
	40795bfae7ba1       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           51 seconds ago      Running             coredns                     1                   87c3aea365a3a       coredns-7d764666f9-t88nq                     kube-system
	8cd86734bcd50       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           51 seconds ago      Running             kindnet-cni                 1                   a86687f05ed71       kindnet-8pr87                                kube-system
	7c5d69297d877       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           56 seconds ago      Running             kube-apiserver              1                   66e7d464c9f90       kube-apiserver-embed-certs-367691            kube-system
	56a9c2fee9e20       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           56 seconds ago      Running             kube-controller-manager     1                   16a24cb0da0db       kube-controller-manager-embed-certs-367691   kube-system
	a108df6f898b1       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           56 seconds ago      Running             kube-scheduler              1                   0ee1b84cbef98       kube-scheduler-embed-certs-367691            kube-system
	8c4fb8d9010ff       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           56 seconds ago      Running             etcd                        1                   6f6775a59872d       etcd-embed-certs-367691                      kube-system
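A container listing like the table above can be reproduced directly against CRI-O on the node; a minimal sketch, assuming the embed-certs-367691 profile is still running and crictl is available inside the node:

	minikube -p embed-certs-367691 ssh -- sudo crictl ps -a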
	
	
	==> coredns [40795bfae7ba1aa8d709cee7fd131cac1b3e5cf4104406b7db981778a9131eb0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53628 - 13084 "HINFO IN 6049901764671328879.6829452539217477503. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004068119s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-367691
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-367691
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=embed-certs-367691
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_28_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:28:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-367691
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:30:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:30:17 +0000   Sat, 27 Dec 2025 10:28:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:30:17 +0000   Sat, 27 Dec 2025 10:28:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:30:17 +0000   Sat, 27 Dec 2025 10:28:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:30:17 +0000   Sat, 27 Dec 2025 10:29:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-367691
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                220a60ff-ddbf-4af6-ab3b-b3aec69cd7bb
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-t88nq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-embed-certs-367691                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         112s
	  kube-system                 kindnet-8pr87                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-367691             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-367691    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-rpjg8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-367691             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-vp5vt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-27bs2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node embed-certs-367691 event: Registered Node embed-certs-367691 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node embed-certs-367691 event: Registered Node embed-certs-367691 in Controller
	
	
	==> dmesg <==
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	[Dec27 10:29] overlayfs: idmapped layers are currently not supported
	[ +34.978626] overlayfs: idmapped layers are currently not supported
	[Dec27 10:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8c4fb8d9010ff30eec94b0fbcdd2a5948b473223b17ca2f2a4b0ce18bedff071] <==
	{"level":"info","ts":"2025-12-27T10:29:43.173846Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:29:43.173855Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:29:43.174029Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:29:43.174039Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:29:43.195523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T10:29:43.195625Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:29:43.195693Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:29:43.538162Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:43.540446Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:43.540511Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:43.540524Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:29:43.540539Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:29:43.544919Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:29:43.544975Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:29:43.545004Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:29:43.545635Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:29:43.549114Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-367691 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:29:43.549235Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:29:43.550134Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:29:43.557309Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:29:43.563082Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:29:43.565563Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:29:43.565600Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:29:43.581359Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:29:43.582103Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:30:39 up  2:13,  0 user,  load average: 5.16, 2.89, 2.27
	Linux embed-certs-367691 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cd86734bcd509d8f341a879f4a8b5dd15f4639c8986a100aec8b8c61e2c100f] <==
	I1227 10:29:48.036386       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:29:48.037019       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:29:48.037257       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:29:48.037451       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:29:48.037533       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:29:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:29:48.321121       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:29:48.321722       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:29:48.321831       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:29:48.322202       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:30:18.324046       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:30:18.324051       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:30:18.324119       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:30:18.324253       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:30:19.822210       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:30:19.822255       1 metrics.go:72] Registering metrics
	I1227 10:30:19.822344       1 controller.go:711] "Syncing nftables rules"
	I1227 10:30:28.324050       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:30:28.324763       1 main.go:301] handling current node
	I1227 10:30:38.321081       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:30:38.321129       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7c5d69297d8771e299ca8b09a7cae96c2e7c5f87879fd1c567112742214e35f3] <==
	I1227 10:29:46.748922       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:29:46.787150       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:29:46.801288       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:46.807436       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:29:46.807951       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:29:46.813891       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 10:29:46.813986       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:46.830960       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:29:46.840097       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 10:29:46.840368       1 aggregator.go:187] initial CRD sync complete...
	I1227 10:29:46.840383       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 10:29:46.840390       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:29:46.840397       1 cache.go:39] Caches are synced for autoregister controller
	E1227 10:29:46.875707       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:29:47.322680       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:29:47.534688       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:29:47.608685       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:29:47.815351       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:29:47.874448       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:29:47.915570       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:29:48.068553       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.248.207"}
	I1227 10:29:48.099309       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.189.29"}
	I1227 10:29:50.095360       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:29:50.197764       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:29:50.246824       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [56a9c2fee9e20cf978b01d8726c038b4c28e466158c9a28f5f3fdc75e851a27d] <==
	I1227 10:29:49.725400       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.725414       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.725459       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.725494       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.729713       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.729801       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.729860       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:29:49.729932       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-367691"
	I1227 10:29:49.729980       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 10:29:49.730008       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.732836       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.732874       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.732912       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.732959       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.733057       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.733568       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.733618       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:29:49.733639       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:29:49.733649       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:29:49.733668       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.771238       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.804137       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.823895       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.823921       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:29:49.823927       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [ee29ffd6fc0fea1a7798eaff2aac02990f393036d328855d43f123ce95af833f] <==
	I1227 10:29:48.260865       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:29:48.357840       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:29:48.458797       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:48.458832       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:29:48.458908       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:29:48.477835       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:29:48.477898       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:29:48.483582       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:29:48.484335       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:29:48.484400       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:29:48.491113       1 config.go:200] "Starting service config controller"
	I1227 10:29:48.491689       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:29:48.491772       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:29:48.491802       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:29:48.491842       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:29:48.491870       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:29:48.495041       1 config.go:309] "Starting node config controller"
	I1227 10:29:48.498558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:29:48.498648       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:29:48.592804       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:29:48.592841       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:29:48.593421       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a108df6f898b109467fab72294c0412641c1e5d2b2ea82f9edf2b1b962883dcf] <==
	I1227 10:29:44.405999       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:29:46.536060       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:29:46.536112       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:29:46.536133       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:29:46.536144       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:29:46.728159       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:29:46.728185       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:29:46.763746       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:29:46.766949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:29:46.767010       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:29:46.768721       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:29:46.871231       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:30:03 embed-certs-367691 kubelet[787]: I1227 10:30:03.990159     787 scope.go:122] "RemoveContainer" containerID="8a012634efd3c64567508c28055ef95dab2cb004621f832697633648ac76ec2a"
	Dec 27 10:30:03 embed-certs-367691 kubelet[787]: E1227 10:30:03.990340     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:06 embed-certs-367691 kubelet[787]: E1227 10:30:06.481298     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:06 embed-certs-367691 kubelet[787]: I1227 10:30:06.483516     787 scope.go:122] "RemoveContainer" containerID="8a012634efd3c64567508c28055ef95dab2cb004621f832697633648ac76ec2a"
	Dec 27 10:30:06 embed-certs-367691 kubelet[787]: I1227 10:30:06.616881     787 scope.go:122] "RemoveContainer" containerID="8a012634efd3c64567508c28055ef95dab2cb004621f832697633648ac76ec2a"
	Dec 27 10:30:07 embed-certs-367691 kubelet[787]: E1227 10:30:07.621519     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:07 embed-certs-367691 kubelet[787]: I1227 10:30:07.621561     787 scope.go:122] "RemoveContainer" containerID="614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad"
	Dec 27 10:30:07 embed-certs-367691 kubelet[787]: E1227 10:30:07.621714     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:13 embed-certs-367691 kubelet[787]: E1227 10:30:13.990168     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:13 embed-certs-367691 kubelet[787]: I1227 10:30:13.990232     787 scope.go:122] "RemoveContainer" containerID="614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad"
	Dec 27 10:30:13 embed-certs-367691 kubelet[787]: E1227 10:30:13.990444     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:18 embed-certs-367691 kubelet[787]: I1227 10:30:18.649676     787 scope.go:122] "RemoveContainer" containerID="7da2abbec178f9d68b05cbeba7d3f44de1cc48a998eac2bb77b106a98e4f6efb"
	Dec 27 10:30:21 embed-certs-367691 kubelet[787]: E1227 10:30:21.121215     787 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-t88nq" containerName="coredns"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: E1227 10:30:33.481487     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: I1227 10:30:33.482016     787 scope.go:122] "RemoveContainer" containerID="614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: I1227 10:30:33.689612     787 scope.go:122] "RemoveContainer" containerID="614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: E1227 10:30:33.689951     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: I1227 10:30:33.689991     787 scope.go:122] "RemoveContainer" containerID="d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: E1227 10:30:33.690213     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:34 embed-certs-367691 kubelet[787]: E1227 10:30:34.693647     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:34 embed-certs-367691 kubelet[787]: I1227 10:30:34.693686     787 scope.go:122] "RemoveContainer" containerID="d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906"
	Dec 27 10:30:34 embed-certs-367691 kubelet[787]: E1227 10:30:34.693830     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:36 embed-certs-367691 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:30:36 embed-certs-367691 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:30:36 embed-certs-367691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f2f8c2294cb00b78dfda28e1049259ceffb42f141d0b7661ada337ade587fa06] <==
	2025/12/27 10:30:01 Using namespace: kubernetes-dashboard
	2025/12/27 10:30:01 Using in-cluster config to connect to apiserver
	2025/12/27 10:30:01 Using secret token for csrf signing
	2025/12/27 10:30:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:30:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:30:01 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:30:01 Generating JWE encryption key
	2025/12/27 10:30:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:30:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:30:02 Initializing JWE encryption key from synchronized object
	2025/12/27 10:30:02 Creating in-cluster Sidecar client
	2025/12/27 10:30:02 Serving insecurely on HTTP port: 9090
	2025/12/27 10:30:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:30:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:30:01 Starting overwatch
	
	
	==> storage-provisioner [7da2abbec178f9d68b05cbeba7d3f44de1cc48a998eac2bb77b106a98e4f6efb] <==
	I1227 10:29:48.194004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:30:18.196149       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ea8f6e14e817184fe51d0f5957f0409363ca444db8803a84523e0966ea183e7f] <==
	I1227 10:30:18.698167       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:30:18.713184       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:30:18.713328       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:30:18.716431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:22.171189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:26.431650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:30.030613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:33.083983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:36.106369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:36.115100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:30:36.115826       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:30:36.116106       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-367691_7b0c10ca-690e-454c-af46-090e71292d00!
	I1227 10:30:36.117415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41e16e59-3f7f-429b-a593-eb5c08bee361", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-367691_7b0c10ca-690e-454c-af46-090e71292d00 became leader
	W1227 10:30:36.126630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:36.155349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:30:36.227454       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-367691_7b0c10ca-690e-454c-af46-090e71292d00!
	W1227 10:30:38.158548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:38.177222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
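
The kubelet journal above shows the dashboard-metrics-scraper container cycling through CrashLoopBackOff, with the restart back-off doubling from 10s to 20s to 40s before kubelet.service was stopped at 10:30:36. Below is a minimal sketch of confirming the same symptom from outside the harness; it assumes kubectl is on PATH and that the embed-certs-367691 context from this run still exists (both are assumptions, not part of the captured output).

// crashloop_check.go - hedged sketch: print restart counts and waiting reasons
// for pods in the kubernetes-dashboard namespace, mirroring what the kubelet
// log above reports. Assumes `kubectl` is on PATH and that the
// "embed-certs-367691" context from this run still exists.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Prints the pod name, then a tab-separated entry per container:
	// container name, restart count, and the waiting reason
	// (e.g. CrashLoopBackOff) when the container is not running.
	jsonpath := `{range .items[*]}{.metadata.name}{"\t"}` +
		`{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\t"}{.state.waiting.reason}{"\n"}{end}{end}`

	out, err := exec.Command("kubectl",
		"--context", "embed-certs-367691", // assumption: context name from this run
		"-n", "kubernetes-dashboard",
		"get", "pods", "-o", "jsonpath="+jsonpath).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}

With kubectl's default of ignoring missing template keys, containers that are currently running simply print an empty reason column.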
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-367691 -n embed-certs-367691
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-367691 -n embed-certs-367691: exit status 2 (583.264885ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-367691 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
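
The status probes above are driven through Go output templates (--format={{.APIServer}}, and later --format={{.Host}}), and the harness deliberately tolerates the non-zero exit ("exit status 2 (may be ok)"), since a non-zero exit from minikube status reflects component state rather than a probe failure, and the template still prints the component value. A minimal sketch of the same probe, assuming the out/minikube-linux-arm64 binary and the embed-certs-367691 profile from this run are still available:

// status_probe.go - hedged sketch of the status probe used in the post-mortem
// above. Assumes out/minikube-linux-arm64 exists relative to the working
// directory and that the embed-certs-367691 profile is still present.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{.APIServer}}",
		"-p", "embed-certs-367691", "-n", "embed-certs-367691")
	out, err := cmd.CombinedOutput()

	state := strings.TrimSpace(string(out)) // e.g. "Running", as printed above
	exitCode := 0
	if ee, ok := err.(*exec.ExitError); ok {
		exitCode = ee.ExitCode()
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}

	// A non-zero exit here is not treated as a hard failure: the harness above
	// records "exit status 2 (may be ok)" while .APIServer still prints
	// "Running" for this paused cluster.
	fmt.Printf("APIServer=%s exit=%d\n", state, exitCode)
}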
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-367691
helpers_test.go:244: (dbg) docker inspect embed-certs-367691:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857",
	        "Created": "2025-12-27T10:28:25.951096938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 508988,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:29:34.236338146Z",
	            "FinishedAt": "2025-12-27T10:29:33.279104768Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/hostname",
	        "HostsPath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/hosts",
	        "LogPath": "/var/lib/docker/containers/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857/d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857-json.log",
	        "Name": "/embed-certs-367691",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-367691:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-367691",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d75458839d4b801e92e42c32aa3826c4145ca40a7c7d66c653b150f55cc36857",
	                "LowerDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b85d5810c00e6c8095e99d780709fb5152e0679becc06d20328758b0ba5c299d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-367691",
	                "Source": "/var/lib/docker/volumes/embed-certs-367691/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-367691",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-367691",
	                "name.minikube.sigs.k8s.io": "embed-certs-367691",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9153ae2bf1ae226b1a6dc45857fdc150d1d90d17b4fefc387f4edfd98dddeb66",
	            "SandboxKey": "/var/run/docker/netns/9153ae2bf1ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-367691": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:8c:b3:5a:d8:6f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d03ce9bfd46e85bbc9765f774251ba284121a67953c86059ad99286cf88212c",
	                    "EndpointID": "673ef0b4beaf14d5aea1880f9d5f46f18ce3288841ad8417515c29249fb12005",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-367691",
	                        "d75458839d4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
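
The full docker inspect dump above is archived for reference, but the few fields the pause post-mortem actually acts on (container state and the published ports) can be read directly with the same --format templates the harness uses elsewhere in this log. A minimal sketch, assuming the embed-certs-367691 container still exists:

// inspect_fields.go - hedged sketch: pull only the fields of `docker inspect`
// that matter for the pause post-mortem, instead of the full JSON dump above.
// Assumes the embed-certs-367691 container exists on the local daemon.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func inspect(format string) string {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", format, "embed-certs-367691").Output()
	if err != nil {
		log.Fatalf("docker inspect %q failed: %v", format, err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// State.Status/State.Paused distinguish a paused kic container ("paused")
	// from the "running" state recorded above.
	fmt.Println("status:", inspect("{{.State.Status}}"))
	fmt.Println("paused:", inspect("{{.State.Paused}}"))
	// Host port published for the API server (8443/tcp -> 127.0.0.1:33441 above).
	fmt.Println("apiserver:", inspect(`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`))
}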
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-367691 -n embed-certs-367691
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-367691 -n embed-certs-367691: exit status 2 (471.169904ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-367691 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-367691 logs -n 25: (1.735943289s)
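
The post-mortem caps log collection at the last 25 lines per container (logs -n 25) and records the wall-clock cost of the capture (about 1.74s here). A minimal sketch of running the same capture under an explicit deadline, assuming the same binary path and profile name; the timeout value is illustrative and not taken from the harness:

// logs_capture.go - hedged sketch of the bounded log-capture step above.
// Assumes out/minikube-linux-arm64 and the embed-certs-367691 profile exist.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Give the capture a hard deadline so a wedged container runtime cannot
	// stall the post-mortem indefinitely (timeout chosen for illustration).
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	start := time.Now()
	out, err := exec.CommandContext(ctx,
		"out/minikube-linux-arm64", "-p", "embed-certs-367691",
		"logs", "-n", "25").CombinedOutput()

	// The captured text is what appears in the "-- stdout --" block below.
	fmt.Printf("captured %d bytes in %s (err=%v)\n", len(out), time.Since(start), err)
}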
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-482317                                                                                                                                                │ old-k8s-version-482317       │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:26 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-784377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-784377 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:26 UTC │ 27 Dec 25 10:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-784377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ image   │ default-k8s-diff-port-784377 image list --format=json                                                                                                                    │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ pause   │ -p default-k8s-diff-port-784377 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                          │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                          │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ ssh     │ force-systemd-flag-915850 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p force-systemd-flag-915850                                                                                                                                             │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p disable-driver-mounts-913868                                                                                                                                          │ disable-driver-mounts-913868 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │                     │
	│ stop    │ -p embed-certs-367691 --alsologtostderr -v=3                                                                                                                             │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable dashboard -p embed-certs-367691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable metrics-server -p no-preload-241090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ stop    │ -p no-preload-241090 --alsologtostderr -v=3                                                                                                                              │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable dashboard -p no-preload-241090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ image   │ embed-certs-367691 image list --format=json                                                                                                                              │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ pause   │ -p embed-certs-367691 --alsologtostderr -v=1                                                                                                                             │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:30:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:30:20.538352  512231 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:30:20.538617  512231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:20.538649  512231 out.go:374] Setting ErrFile to fd 2...
	I1227 10:30:20.538708  512231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:20.539026  512231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:30:20.539526  512231 out.go:368] Setting JSON to false
	I1227 10:30:20.540810  512231 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7974,"bootTime":1766823447,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:30:20.540917  512231 start.go:143] virtualization:  
	I1227 10:30:20.543921  512231 out.go:179] * [no-preload-241090] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:30:20.548091  512231 notify.go:221] Checking for updates...
	I1227 10:30:20.548113  512231 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:30:20.552128  512231 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:30:20.555152  512231 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:30:20.558125  512231 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:30:20.561672  512231 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:30:20.564736  512231 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:30:20.568225  512231 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:20.568825  512231 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:30:20.590086  512231 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:30:20.590280  512231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:30:20.651335  512231 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:30:20.641524756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:30:20.651449  512231 docker.go:319] overlay module found
	I1227 10:30:20.655120  512231 out.go:179] * Using the docker driver based on existing profile
	I1227 10:30:20.658033  512231 start.go:309] selected driver: docker
	I1227 10:30:20.658061  512231 start.go:928] validating driver "docker" against &{Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:30:20.658182  512231 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:30:20.658941  512231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:30:20.716221  512231 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:30:20.705971211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:30:20.716563  512231 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:30:20.716601  512231 cni.go:84] Creating CNI manager for ""
	I1227 10:30:20.716665  512231 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:30:20.716707  512231 start.go:353] cluster config:
	{Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:30:20.720072  512231 out.go:179] * Starting "no-preload-241090" primary control-plane node in "no-preload-241090" cluster
	I1227 10:30:20.723010  512231 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:30:20.726057  512231 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:30:20.728885  512231 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:30:20.728972  512231 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:30:20.729050  512231 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/config.json ...
	I1227 10:30:20.729336  512231 cache.go:107] acquiring lock: {Name:mk20c624f37c3909dde5a8d589ecabaa6d57d038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.729473  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 10:30:20.729501  512231 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 172.137µs
	I1227 10:30:20.729533  512231 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 10:30:20.729564  512231 cache.go:107] acquiring lock: {Name:mkbb24fa4343d0a35603cb19aa6239dff4f2f276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.729621  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 10:30:20.729649  512231 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 86.91µs
	I1227 10:30:20.729671  512231 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 10:30:20.729697  512231 cache.go:107] acquiring lock: {Name:mk4c45856071606c8af5d7273166a2f1bb9ddc55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.729747  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 10:30:20.729775  512231 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 78.991µs
	I1227 10:30:20.729796  512231 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 10:30:20.729836  512231 cache.go:107] acquiring lock: {Name:mkf9b1edb58a976305f282f57eeb11e80f0b7bb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.729929  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 10:30:20.729953  512231 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 130.84µs
	I1227 10:30:20.729996  512231 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 10:30:20.730025  512231 cache.go:107] acquiring lock: {Name:mkf98c62b88cf915fe929ba90cd6ed029cecc870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.730079  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 10:30:20.730112  512231 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 84.325µs
	I1227 10:30:20.730134  512231 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 10:30:20.730159  512231 cache.go:107] acquiring lock: {Name:mka12fccf8e2bbc0ccc499614d0ccb8a211e1cb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.730209  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1227 10:30:20.730229  512231 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 71.049µs
	I1227 10:30:20.730253  512231 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 10:30:20.730281  512231 cache.go:107] acquiring lock: {Name:mk2a8f120e089d53474aed758c34eb39d391985d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.730329  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 10:30:20.730349  512231 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 72.912µs
	I1227 10:30:20.730369  512231 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 10:30:20.730400  512231 cache.go:107] acquiring lock: {Name:mk262c37486fa86829e275f8385c93b0718c0ef2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.730456  512231 cache.go:115] /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 10:30:20.730476  512231 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 78.573µs
	I1227 10:30:20.730501  512231 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 10:30:20.730524  512231 cache.go:87] Successfully saved all images to host disk.
	I1227 10:30:20.751280  512231 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:30:20.751306  512231 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:30:20.751330  512231 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:30:20.751364  512231 start.go:360] acquireMachinesLock for no-preload-241090: {Name:mk51902d6c01d44d9c13da3d668b0d82e1b30c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:20.751434  512231 start.go:364] duration metric: took 48.288µs to acquireMachinesLock for "no-preload-241090"
	I1227 10:30:20.751458  512231 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:30:20.751469  512231 fix.go:54] fixHost starting: 
	I1227 10:30:20.751760  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:20.768965  512231 fix.go:112] recreateIfNeeded on no-preload-241090: state=Stopped err=<nil>
	W1227 10:30:20.768996  512231 fix.go:138] unexpected machine state, will restart: <nil>
	W1227 10:30:19.779235  508852 pod_ready.go:104] pod "coredns-7d764666f9-t88nq" is not "Ready", error: <nil>
	I1227 10:30:21.280352  508852 pod_ready.go:94] pod "coredns-7d764666f9-t88nq" is "Ready"
	I1227 10:30:21.280378  508852 pod_ready.go:86] duration metric: took 32.507176806s for pod "coredns-7d764666f9-t88nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.285031  508852 pod_ready.go:83] waiting for pod "etcd-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.291920  508852 pod_ready.go:94] pod "etcd-embed-certs-367691" is "Ready"
	I1227 10:30:21.292024  508852 pod_ready.go:86] duration metric: took 6.9694ms for pod "etcd-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.294509  508852 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.299446  508852 pod_ready.go:94] pod "kube-apiserver-embed-certs-367691" is "Ready"
	I1227 10:30:21.299515  508852 pod_ready.go:86] duration metric: took 4.981228ms for pod "kube-apiserver-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.302554  508852 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.479699  508852 pod_ready.go:94] pod "kube-controller-manager-embed-certs-367691" is "Ready"
	I1227 10:30:21.479787  508852 pod_ready.go:86] duration metric: took 177.207952ms for pod "kube-controller-manager-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:21.677135  508852 pod_ready.go:83] waiting for pod "kube-proxy-rpjg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:22.077657  508852 pod_ready.go:94] pod "kube-proxy-rpjg8" is "Ready"
	I1227 10:30:22.077687  508852 pod_ready.go:86] duration metric: took 400.481889ms for pod "kube-proxy-rpjg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:22.276738  508852 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:22.677865  508852 pod_ready.go:94] pod "kube-scheduler-embed-certs-367691" is "Ready"
	I1227 10:30:22.677898  508852 pod_ready.go:86] duration metric: took 401.131611ms for pod "kube-scheduler-embed-certs-367691" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:30:22.677911  508852 pod_ready.go:40] duration metric: took 33.968861104s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:30:22.765162  508852 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:30:22.768504  508852 out.go:203] 
	W1227 10:30:22.771321  508852 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:30:22.773991  508852 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:30:22.776865  508852 out.go:179] * Done! kubectl is now configured to use "embed-certs-367691" cluster and "default" namespace by default
	I1227 10:30:20.772330  512231 out.go:252] * Restarting existing docker container for "no-preload-241090" ...
	I1227 10:30:20.772428  512231 cli_runner.go:164] Run: docker start no-preload-241090
	I1227 10:30:21.042721  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:21.067474  512231 kic.go:430] container "no-preload-241090" state is running.
	I1227 10:30:21.067854  512231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241090
	I1227 10:30:21.094104  512231 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/config.json ...
	I1227 10:30:21.094338  512231 machine.go:94] provisionDockerMachine start ...
	I1227 10:30:21.094395  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:21.120355  512231 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:21.120745  512231 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 10:30:21.120755  512231 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:30:21.124005  512231 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56298->127.0.0.1:33443: read: connection reset by peer
	I1227 10:30:24.267855  512231 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-241090
	
	I1227 10:30:24.267880  512231 ubuntu.go:182] provisioning hostname "no-preload-241090"
	I1227 10:30:24.267947  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:24.286053  512231 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:24.286386  512231 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 10:30:24.286404  512231 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-241090 && echo "no-preload-241090" | sudo tee /etc/hostname
	I1227 10:30:24.433517  512231 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-241090
	
	I1227 10:30:24.433624  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:24.452304  512231 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:24.452632  512231 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 10:30:24.452655  512231 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-241090' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-241090/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-241090' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:30:24.596371  512231 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:30:24.596397  512231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:30:24.596428  512231 ubuntu.go:190] setting up certificates
	I1227 10:30:24.596446  512231 provision.go:84] configureAuth start
	I1227 10:30:24.596507  512231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241090
	I1227 10:30:24.614716  512231 provision.go:143] copyHostCerts
	I1227 10:30:24.614786  512231 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:30:24.614811  512231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:30:24.614893  512231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:30:24.615015  512231 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:30:24.615026  512231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:30:24.615060  512231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:30:24.615126  512231 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:30:24.615135  512231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:30:24.615159  512231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:30:24.615221  512231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.no-preload-241090 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-241090]
	I1227 10:30:25.121062  512231 provision.go:177] copyRemoteCerts
	I1227 10:30:25.121143  512231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:30:25.121203  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:25.142688  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:25.246221  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:30:25.267893  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:30:25.287599  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:30:25.306338  512231 provision.go:87] duration metric: took 709.866125ms to configureAuth
	I1227 10:30:25.306369  512231 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:30:25.306611  512231 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:25.306731  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:25.324991  512231 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:25.325310  512231 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1227 10:30:25.325331  512231 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:30:25.706290  512231 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:30:25.706361  512231 machine.go:97] duration metric: took 4.612012852s to provisionDockerMachine
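	(Note: the sysconfig file written a few lines above should contain exactly the CRIO_MINIKUBE_OPTIONS line echoed back in the SSH output. Assuming the kicbase crio unit sources /etc/sysconfig/crio.minikube, a rough sanity check would be:)
	out/minikube-linux-arm64 -p no-preload-241090 ssh "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	# active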
	I1227 10:30:25.706379  512231 start.go:293] postStartSetup for "no-preload-241090" (driver="docker")
	I1227 10:30:25.706391  512231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:30:25.706464  512231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:30:25.706507  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:25.729016  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:25.832429  512231 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:30:25.836180  512231 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:30:25.836210  512231 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:30:25.836241  512231 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:30:25.836320  512231 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:30:25.836440  512231 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:30:25.836551  512231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:30:25.844441  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:30:25.862491  512231 start.go:296] duration metric: took 156.081264ms for postStartSetup
	I1227 10:30:25.862600  512231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:30:25.862661  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:25.879617  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:25.977232  512231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:30:25.982365  512231 fix.go:56] duration metric: took 5.230888252s for fixHost
	I1227 10:30:25.982396  512231 start.go:83] releasing machines lock for "no-preload-241090", held for 5.230949857s
	I1227 10:30:25.982476  512231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-241090
	I1227 10:30:25.999702  512231 ssh_runner.go:195] Run: cat /version.json
	I1227 10:30:25.999763  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:26.000061  512231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:30:26.000139  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:26.026270  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:26.026860  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:26.212864  512231 ssh_runner.go:195] Run: systemctl --version
	I1227 10:30:26.221348  512231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:30:26.260440  512231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:30:26.265876  512231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:30:26.265962  512231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:30:26.277955  512231 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 10:30:26.277988  512231 start.go:496] detecting cgroup driver to use...
	I1227 10:30:26.278041  512231 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:30:26.278110  512231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:30:26.293678  512231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:30:26.307120  512231 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:30:26.307208  512231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:30:26.322935  512231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:30:26.337140  512231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:30:26.452292  512231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:30:26.578890  512231 docker.go:234] disabling docker service ...
	I1227 10:30:26.579008  512231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:30:26.595076  512231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:30:26.609182  512231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:30:26.739046  512231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:30:26.863172  512231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:30:26.878046  512231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:30:26.893223  512231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:30:26.893304  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.902109  512231 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:30:26.902180  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.911470  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.921851  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.933350  512231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:30:26.941800  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.950834  512231 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.959507  512231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:26.968557  512231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:30:26.976567  512231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:30:26.984304  512231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:30:27.099199  512231 ssh_runner.go:195] Run: sudo systemctl restart crio
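	(Note: pieced together from the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf should end up looking roughly like this; a sketch of the expected result, not a dump of the actual file, and the section headers assume the stock kicbase layout.)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]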
	I1227 10:30:27.278616  512231 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:30:27.278784  512231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:30:27.283009  512231 start.go:574] Will wait 60s for crictl version
	I1227 10:30:27.283136  512231 ssh_runner.go:195] Run: which crictl
	I1227 10:30:27.287616  512231 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:30:27.318244  512231 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:30:27.318419  512231 ssh_runner.go:195] Run: crio --version
	I1227 10:30:27.350422  512231 ssh_runner.go:195] Run: crio --version
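	(Note: the same runtime details can be queried by hand against the socket configured in /etc/crictl.yaml above, e.g.:)
	out/minikube-linux-arm64 -p no-preload-241090 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version"
	# RuntimeName:       cri-o
	# RuntimeVersion:    1.34.3
	# RuntimeApiVersion: v1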
	I1227 10:30:27.384507  512231 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:30:27.387309  512231 cli_runner.go:164] Run: docker network inspect no-preload-241090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:30:27.404596  512231 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:30:27.408625  512231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:30:27.418706  512231 kubeadm.go:884] updating cluster {Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:30:27.418833  512231 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:30:27.418876  512231 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:30:27.455960  512231 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:30:27.456034  512231 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:30:27.456049  512231 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I1227 10:30:27.456146  512231 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-241090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:30:27.456238  512231 ssh_runner.go:195] Run: crio config
	I1227 10:30:27.529993  512231 cni.go:84] Creating CNI manager for ""
	I1227 10:30:27.530069  512231 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:30:27.530104  512231 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:30:27.530161  512231 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-241090 NodeName:no-preload-241090 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:30:27.530360  512231 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-241090"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:30:27.530478  512231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:30:27.538859  512231 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:30:27.538947  512231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:30:27.546624  512231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 10:30:27.567107  512231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:30:27.580261  512231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
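	(Note: the rendered config is deliberately written to kubeadm.yaml.new rather than kubeadm.yaml; the restart path further down decides whether reconfiguration is needed by diffing the two, which can also be reproduced by hand:)
	out/minikube-linux-arm64 -p no-preload-241090 ssh "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"
	# empty output (exit 0) means the running cluster does not require reconfiguration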
	I1227 10:30:27.592953  512231 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:30:27.596686  512231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:30:27.606875  512231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:30:27.724222  512231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:30:27.744775  512231 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090 for IP: 192.168.85.2
	I1227 10:30:27.744839  512231 certs.go:195] generating shared ca certs ...
	I1227 10:30:27.744871  512231 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:27.745049  512231 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:30:27.745119  512231 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:30:27.745142  512231 certs.go:257] generating profile certs ...
	I1227 10:30:27.745277  512231 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/client.key
	I1227 10:30:27.745398  512231 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/apiserver.key.a9feda9d
	I1227 10:30:27.745468  512231 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/proxy-client.key
	I1227 10:30:27.745615  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:30:27.745691  512231 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:30:27.745728  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:30:27.745782  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:30:27.745842  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:30:27.745894  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:30:27.745983  512231 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:30:27.746617  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:30:27.771695  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:30:27.791775  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:30:27.811336  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:30:27.831040  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 10:30:27.850178  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:30:27.868794  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:30:27.895502  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:30:27.914299  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:30:27.941773  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:30:27.962611  512231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:30:27.986709  512231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:30:28.005697  512231 ssh_runner.go:195] Run: openssl version
	I1227 10:30:28.013175  512231 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:30:28.023009  512231 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:30:28.031569  512231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:30:28.036263  512231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:30:28.036384  512231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:30:28.079486  512231 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:30:28.089682  512231 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:30:28.099311  512231 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:30:28.107776  512231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:30:28.111676  512231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:30:28.111738  512231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:30:28.153057  512231 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:30:28.160905  512231 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:30:28.170003  512231 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:30:28.179216  512231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:30:28.183251  512231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:30:28.183330  512231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:30:28.225148  512231 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
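	(Note: the openssl x509 -hash / ln -fs pairs above implement the usual OpenSSL CA-directory layout: each PEM under /usr/share/ca-certificates gets a subject-hash-named symlink in /etc/ssl/certs, b5213941.0, 51391683.0 and 3ec20f2e.0 in this run. A minimal sketch of the same steps for a single cert:)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	test -L "/etc/ssl/certs/${HASH}.0" && echo ok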
	I1227 10:30:28.233705  512231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:30:28.237991  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 10:30:28.282700  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 10:30:28.340982  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 10:30:28.397474  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 10:30:28.490293  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 10:30:28.567982  512231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
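	(Note: each -checkend 86400 call above exits non-zero if the certificate would expire within the next 24 hours; adding -enddate shows the actual expiry, e.g. with a path taken from the log, timestamp not captured here:)
	sudo openssl x509 -noout -enddate -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	# notAfter=<expiry timestamp>
	# Certificate will not expire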
	I1227 10:30:28.623176  512231 kubeadm.go:401] StartCluster: {Name:no-preload-241090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-241090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:30:28.623266  512231 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:30:28.623338  512231 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:30:28.690520  512231 cri.go:96] found id: "0be2bd393e285cb49c8e5b5f66063ce6781e934558ad30c47aa3aec488565ab9"
	I1227 10:30:28.690547  512231 cri.go:96] found id: "5ef714a1055a6cf93a2f1f0f649e4d4fa6f789af9150c2755a1c2d09b53037b1"
	I1227 10:30:28.690553  512231 cri.go:96] found id: "4264015374f91b531af599acfc367aa072b442eccc1ffead423255914a0d9f09"
	I1227 10:30:28.690557  512231 cri.go:96] found id: "96e2bc84c864d4d7cc89f0f2517101b59c5cc5096c04209185554cf59b742f37"
	I1227 10:30:28.690561  512231 cri.go:96] found id: ""
	I1227 10:30:28.690613  512231 ssh_runner.go:195] Run: sudo runc list -f json
	W1227 10:30:28.722579  512231 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:30:28Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:30:28.722659  512231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:30:28.737407  512231 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 10:30:28.737497  512231 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 10:30:28.737588  512231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 10:30:28.757588  512231 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 10:30:28.758521  512231 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-241090" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:30:28.759210  512231 kubeconfig.go:62] /home/jenkins/minikube-integration/22343-297941/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-241090" cluster setting kubeconfig missing "no-preload-241090" context setting]
	I1227 10:30:28.760186  512231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:28.762418  512231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 10:30:28.778475  512231 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 10:30:28.778575  512231 kubeadm.go:602] duration metric: took 41.046513ms to restartPrimaryControlPlane
	I1227 10:30:28.778611  512231 kubeadm.go:403] duration metric: took 155.443736ms to StartCluster
	I1227 10:30:28.778680  512231 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:28.778793  512231 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:30:28.780588  512231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:28.781253  512231 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:30:28.781631  512231 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:28.781656  512231 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:30:28.781841  512231 addons.go:70] Setting storage-provisioner=true in profile "no-preload-241090"
	I1227 10:30:28.781859  512231 addons.go:239] Setting addon storage-provisioner=true in "no-preload-241090"
	W1227 10:30:28.781866  512231 addons.go:248] addon storage-provisioner should already be in state true
	I1227 10:30:28.781893  512231 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:30:28.782352  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:28.782779  512231 addons.go:70] Setting default-storageclass=true in profile "no-preload-241090"
	I1227 10:30:28.782842  512231 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-241090"
	I1227 10:30:28.782968  512231 addons.go:70] Setting dashboard=true in profile "no-preload-241090"
	I1227 10:30:28.783000  512231 addons.go:239] Setting addon dashboard=true in "no-preload-241090"
	W1227 10:30:28.783010  512231 addons.go:248] addon dashboard should already be in state true
	I1227 10:30:28.783032  512231 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:30:28.783329  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:28.783514  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:28.787200  512231 out.go:179] * Verifying Kubernetes components...
	I1227 10:30:28.791024  512231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:30:28.855796  512231 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:30:28.867877  512231 addons.go:239] Setting addon default-storageclass=true in "no-preload-241090"
	W1227 10:30:28.867900  512231 addons.go:248] addon default-storageclass should already be in state true
	I1227 10:30:28.867929  512231 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:30:28.868377  512231 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:30:28.868563  512231 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:30:28.868586  512231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:30:28.868629  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:28.884138  512231 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 10:30:28.887181  512231 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 10:30:28.890573  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 10:30:28.890603  512231 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 10:30:28.890694  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:28.911513  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:28.921438  512231 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:30:28.921470  512231 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:30:28.921536  512231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:30:28.964989  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:28.964989  512231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:30:29.193255  512231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:30:29.233767  512231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:30:29.243280  512231 node_ready.go:35] waiting up to 6m0s for node "no-preload-241090" to be "Ready" ...
	I1227 10:30:29.250394  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 10:30:29.250576  512231 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 10:30:29.300583  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 10:30:29.300682  512231 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 10:30:29.365244  512231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:30:29.369819  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 10:30:29.369907  512231 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 10:30:29.465499  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 10:30:29.465533  512231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 10:30:29.550590  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 10:30:29.550614  512231 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 10:30:29.633168  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 10:30:29.633193  512231 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 10:30:29.662324  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 10:30:29.662430  512231 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 10:30:29.690834  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 10:30:29.690953  512231 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 10:30:29.719121  512231 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 10:30:29.719209  512231 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 10:30:29.745996  512231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
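	(Note: once that apply completes, about 4.3s later per the Completed line below, the dashboard objects should exist in the kubernetes-dashboard namespace; a quick way to inspect them, assuming the kubeconfig written by this run:)
	kubectl --kubeconfig /home/jenkins/minikube-integration/22343-297941/kubeconfig -n kubernetes-dashboard get deploy,svc,pods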
	I1227 10:30:32.307044  512231 node_ready.go:49] node "no-preload-241090" is "Ready"
	I1227 10:30:32.307078  512231 node_ready.go:38] duration metric: took 3.063619964s for node "no-preload-241090" to be "Ready" ...
	I1227 10:30:32.307093  512231 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:30:32.307159  512231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:30:34.069876  512231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.704516105s)
	I1227 10:30:34.070202  512231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.324104161s)
	I1227 10:30:34.070417  512231 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.763241432s)
	I1227 10:30:34.070461  512231 api_server.go:72] duration metric: took 5.289146285s to wait for apiserver process to appear ...
	I1227 10:30:34.070469  512231 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:30:34.070486  512231 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 10:30:34.071520  512231 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.8377211s)
	I1227 10:30:34.073765  512231 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-241090 addons enable metrics-server
	
	I1227 10:30:34.089159  512231 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 10:30:34.091681  512231 api_server.go:141] control plane version: v1.35.0
	I1227 10:30:34.091722  512231 api_server.go:131] duration metric: took 21.246255ms to wait for apiserver health ...
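	(Note: the healthz probe above hits https://192.168.85.2:8443/healthz directly; the equivalent check through the repaired kubeconfig, assuming the no-preload-241090 context is current, is simply:)
	kubectl --kubeconfig /home/jenkins/minikube-integration/22343-297941/kubeconfig get --raw /healthz
	# ok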
	I1227 10:30:34.091736  512231 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:30:34.105973  512231 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1227 10:30:34.107696  512231 system_pods.go:59] 8 kube-system pods found
	I1227 10:30:34.107744  512231 system_pods.go:61] "coredns-7d764666f9-5p545" [0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:30:34.107753  512231 system_pods.go:61] "etcd-no-preload-241090" [835a968b-a507-4885-a74d-434ece70fa72] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:30:34.107789  512231 system_pods.go:61] "kindnet-jh987" [6cbce1aa-237d-42fa-bc32-dde8b72f3668] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:30:34.107805  512231 system_pods.go:61] "kube-apiserver-no-preload-241090" [e5c18f64-1c76-496a-8dd0-b5cbcfffefb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:30:34.107813  512231 system_pods.go:61] "kube-controller-manager-no-preload-241090" [12e95943-625c-4a69-aeff-d4364483de48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:30:34.107836  512231 system_pods.go:61] "kube-proxy-8xv88" [ffe92c3b-92ca-41f8-91a8-2c0983689068] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:30:34.107869  512231 system_pods.go:61] "kube-scheduler-no-preload-241090" [55ff2824-5114-426e-a833-df3be58eee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:30:34.107880  512231 system_pods.go:61] "storage-provisioner" [4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf] Running
	I1227 10:30:34.107887  512231 system_pods.go:74] duration metric: took 16.14569ms to wait for pod list to return data ...
	I1227 10:30:34.107902  512231 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:30:34.109021  512231 addons.go:530] duration metric: took 5.327366858s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1227 10:30:34.121053  512231 default_sa.go:45] found service account: "default"
	I1227 10:30:34.121079  512231 default_sa.go:55] duration metric: took 13.17039ms for default service account to be created ...
	I1227 10:30:34.121089  512231 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:30:34.130418  512231 system_pods.go:86] 8 kube-system pods found
	I1227 10:30:34.130454  512231 system_pods.go:89] "coredns-7d764666f9-5p545" [0879e7b0-fd06-4d2e-9f00-9f0aad9cc6d4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:30:34.130464  512231 system_pods.go:89] "etcd-no-preload-241090" [835a968b-a507-4885-a74d-434ece70fa72] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:30:34.130474  512231 system_pods.go:89] "kindnet-jh987" [6cbce1aa-237d-42fa-bc32-dde8b72f3668] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:30:34.130487  512231 system_pods.go:89] "kube-apiserver-no-preload-241090" [e5c18f64-1c76-496a-8dd0-b5cbcfffefb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 10:30:34.130494  512231 system_pods.go:89] "kube-controller-manager-no-preload-241090" [12e95943-625c-4a69-aeff-d4364483de48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:30:34.130502  512231 system_pods.go:89] "kube-proxy-8xv88" [ffe92c3b-92ca-41f8-91a8-2c0983689068] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1227 10:30:34.130513  512231 system_pods.go:89] "kube-scheduler-no-preload-241090" [55ff2824-5114-426e-a833-df3be58eee18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 10:30:34.130518  512231 system_pods.go:89] "storage-provisioner" [4a8f62e4-4f0f-4934-988d-5a7b4bc36ccf] Running
	I1227 10:30:34.130526  512231 system_pods.go:126] duration metric: took 9.431209ms to wait for k8s-apps to be running ...
	I1227 10:30:34.130533  512231 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:30:34.130589  512231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:30:34.148404  512231 system_svc.go:56] duration metric: took 17.860136ms WaitForService to wait for kubelet
	I1227 10:30:34.148431  512231 kubeadm.go:587] duration metric: took 5.36711424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:30:34.148456  512231 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:30:34.159277  512231 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:30:34.159308  512231 node_conditions.go:123] node cpu capacity is 2
	I1227 10:30:34.159322  512231 node_conditions.go:105] duration metric: took 10.860779ms to run NodePressure ...
	I1227 10:30:34.159336  512231 start.go:242] waiting for startup goroutines ...
	I1227 10:30:34.159343  512231 start.go:247] waiting for cluster config update ...
	I1227 10:30:34.159354  512231 start.go:256] writing updated cluster config ...
	I1227 10:30:34.159628  512231 ssh_runner.go:195] Run: rm -f paused
	I1227 10:30:34.164216  512231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:30:34.168369  512231 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5p545" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 10:30:36.236511  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:30:38.674751  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
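	(Note: the two warnings above only mean the coredns pod had not yet reported Ready within the first seconds of polling; the exact condition being waited on can be inspected directly, with the pod name taken from the log:)
	kubectl -n kube-system get pod coredns-7d764666f9-5p545 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "True" once the readiness probe passes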
	
	
	==> CRI-O <==
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.336778427Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.343014478Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.343181372Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.343285192Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.360401452Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.360577528Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.36065889Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.372331633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.372496081Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.372568927Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.380386336Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:30:28 embed-certs-367691 crio[655]: time="2025-12-27T10:30:28.380559056Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.482745838Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=eae0753a-76f8-44db-8303-00daa7397fb7 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.485350476Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ffd455f0-2025-41c3-97e4-36b98daedf07 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.487898793Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt/dashboard-metrics-scraper" id=31fd7413-8ff7-44e9-bbcf-3a7356a35b4b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.488098491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.499506246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.500490314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.529843202Z" level=info msg="Created container d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt/dashboard-metrics-scraper" id=31fd7413-8ff7-44e9-bbcf-3a7356a35b4b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.531096261Z" level=info msg="Starting container: d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906" id=39ae6d3e-a8c0-436f-8ef3-17330159a43c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.536011462Z" level=info msg="Started container" PID=1743 containerID=d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt/dashboard-metrics-scraper id=39ae6d3e-a8c0-436f-8ef3-17330159a43c name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e905e033fb00c3da0705e22b29c7e3ab63606db1b08b5e9287ecfd0333a50e8
	Dec 27 10:30:33 embed-certs-367691 conmon[1741]: conmon d3e7508c9848f2bffdb6 <ninfo>: container 1743 exited with status 1
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.692350239Z" level=info msg="Removing container: 614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad" id=9561ba05-e0df-4702-a01d-128e1e80625a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.710609468Z" level=info msg="Error loading conmon cgroup of container 614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad: cgroup deleted" id=9561ba05-e0df-4702-a01d-128e1e80625a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:30:33 embed-certs-367691 crio[655]: time="2025-12-27T10:30:33.716526082Z" level=info msg="Removed container 614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt/dashboard-metrics-scraper" id=9561ba05-e0df-4702-a01d-128e1e80625a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d3e7508c9848f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   2e905e033fb00       dashboard-metrics-scraper-867fb5f87b-vp5vt   kubernetes-dashboard
	ea8f6e14e8171       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago      Running             storage-provisioner         2                   6e72752f3bb39       storage-provisioner                          kube-system
	f2f8c2294cb00       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   57f14e7873b7c       kubernetes-dashboard-b84665fb8-27bs2         kubernetes-dashboard
	e7424ec6139f7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago      Running             busybox                     1                   a4a69001335f2       busybox                                      default
	7da2abbec178f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago      Exited              storage-provisioner         1                   6e72752f3bb39       storage-provisioner                          kube-system
	ee29ffd6fc0fe       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           54 seconds ago      Running             kube-proxy                  1                   6dbcf42befb8c       kube-proxy-rpjg8                             kube-system
	40795bfae7ba1       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           54 seconds ago      Running             coredns                     1                   87c3aea365a3a       coredns-7d764666f9-t88nq                     kube-system
	8cd86734bcd50       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           54 seconds ago      Running             kindnet-cni                 1                   a86687f05ed71       kindnet-8pr87                                kube-system
	7c5d69297d877       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           59 seconds ago      Running             kube-apiserver              1                   66e7d464c9f90       kube-apiserver-embed-certs-367691            kube-system
	56a9c2fee9e20       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           59 seconds ago      Running             kube-controller-manager     1                   16a24cb0da0db       kube-controller-manager-embed-certs-367691   kube-system
	a108df6f898b1       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           59 seconds ago      Running             kube-scheduler              1                   0ee1b84cbef98       kube-scheduler-embed-certs-367691            kube-system
	8c4fb8d9010ff       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           59 seconds ago      Running             etcd                        1                   6f6775a59872d       etcd-embed-certs-367691                      kube-system
	
	
	==> coredns [40795bfae7ba1aa8d709cee7fd131cac1b3e5cf4104406b7db981778a9131eb0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53628 - 13084 "HINFO IN 6049901764671328879.6829452539217477503. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004068119s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-367691
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-367691
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=embed-certs-367691
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_28_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:28:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-367691
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:30:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:30:17 +0000   Sat, 27 Dec 2025 10:28:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:30:17 +0000   Sat, 27 Dec 2025 10:28:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:30:17 +0000   Sat, 27 Dec 2025 10:28:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:30:17 +0000   Sat, 27 Dec 2025 10:29:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-367691
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                220a60ff-ddbf-4af6-ab3b-b3aec69cd7bb
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-t88nq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-embed-certs-367691                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-8pr87                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-367691             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-367691    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-rpjg8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-367691             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-vp5vt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-27bs2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node embed-certs-367691 event: Registered Node embed-certs-367691 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node embed-certs-367691 event: Registered Node embed-certs-367691 in Controller
	
	
	==> dmesg <==
	[Dec27 10:00] overlayfs: idmapped layers are currently not supported
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	[Dec27 10:29] overlayfs: idmapped layers are currently not supported
	[ +34.978626] overlayfs: idmapped layers are currently not supported
	[Dec27 10:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8c4fb8d9010ff30eec94b0fbcdd2a5948b473223b17ca2f2a4b0ce18bedff071] <==
	{"level":"info","ts":"2025-12-27T10:29:43.173846Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:29:43.173855Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:29:43.174029Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:29:43.174039Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:29:43.195523Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-27T10:29:43.195625Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:29:43.195693Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:29:43.538162Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:43.540446Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:43.540511Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:29:43.540524Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:29:43.540539Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:29:43.544919Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:29:43.544975Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:29:43.545004Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:29:43.545635Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:29:43.549114Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-367691 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:29:43.549235Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:29:43.550134Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:29:43.557309Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:29:43.563082Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-27T10:29:43.565563Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:29:43.565600Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:29:43.581359Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:29:43.582103Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:30:42 up  2:13,  0 user,  load average: 5.16, 2.89, 2.27
	Linux embed-certs-367691 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cd86734bcd509d8f341a879f4a8b5dd15f4639c8986a100aec8b8c61e2c100f] <==
	I1227 10:29:48.036386       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:29:48.037019       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:29:48.037257       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:29:48.037451       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:29:48.037533       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:29:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:29:48.321121       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:29:48.321722       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:29:48.321831       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:29:48.322202       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:30:18.324046       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:30:18.324051       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:30:18.324119       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:30:18.324253       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:30:19.822210       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:30:19.822255       1 metrics.go:72] Registering metrics
	I1227 10:30:19.822344       1 controller.go:711] "Syncing nftables rules"
	I1227 10:30:28.324050       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:30:28.324763       1 main.go:301] handling current node
	I1227 10:30:38.321081       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1227 10:30:38.321129       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7c5d69297d8771e299ca8b09a7cae96c2e7c5f87879fd1c567112742214e35f3] <==
	I1227 10:29:46.748922       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:29:46.787150       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:29:46.801288       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:46.807436       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:29:46.807951       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:29:46.813891       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1227 10:29:46.813986       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:46.830960       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:29:46.840097       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 10:29:46.840368       1 aggregator.go:187] initial CRD sync complete...
	I1227 10:29:46.840383       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 10:29:46.840390       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:29:46.840397       1 cache.go:39] Caches are synced for autoregister controller
	E1227 10:29:46.875707       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:29:47.322680       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:29:47.534688       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:29:47.608685       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:29:47.815351       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:29:47.874448       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:29:47.915570       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:29:48.068553       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.248.207"}
	I1227 10:29:48.099309       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.189.29"}
	I1227 10:29:50.095360       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:29:50.197764       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:29:50.246824       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [56a9c2fee9e20cf978b01d8726c038b4c28e466158c9a28f5f3fdc75e851a27d] <==
	I1227 10:29:49.725400       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.725414       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.725459       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.725494       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.729713       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.729801       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.729860       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:29:49.729932       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-367691"
	I1227 10:29:49.729980       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1227 10:29:49.730008       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.732836       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.732874       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.732912       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.732959       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.733057       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.733568       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.733618       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:29:49.733639       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:29:49.733649       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:29:49.733668       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.771238       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.804137       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.823895       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:49.823921       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:29:49.823927       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [ee29ffd6fc0fea1a7798eaff2aac02990f393036d328855d43f123ce95af833f] <==
	I1227 10:29:48.260865       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:29:48.357840       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:29:48.458797       1 shared_informer.go:377] "Caches are synced"
	I1227 10:29:48.458832       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:29:48.458908       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:29:48.477835       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:29:48.477898       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:29:48.483582       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:29:48.484335       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:29:48.484400       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:29:48.491113       1 config.go:200] "Starting service config controller"
	I1227 10:29:48.491689       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:29:48.491772       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:29:48.491802       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:29:48.491842       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:29:48.491870       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:29:48.495041       1 config.go:309] "Starting node config controller"
	I1227 10:29:48.498558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:29:48.498648       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:29:48.592804       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:29:48.592841       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:29:48.593421       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a108df6f898b109467fab72294c0412641c1e5d2b2ea82f9edf2b1b962883dcf] <==
	I1227 10:29:44.405999       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:29:46.536060       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:29:46.536112       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:29:46.536133       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:29:46.536144       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:29:46.728159       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:29:46.728185       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:29:46.763746       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:29:46.766949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:29:46.767010       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:29:46.768721       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:29:46.871231       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:30:03 embed-certs-367691 kubelet[787]: I1227 10:30:03.990159     787 scope.go:122] "RemoveContainer" containerID="8a012634efd3c64567508c28055ef95dab2cb004621f832697633648ac76ec2a"
	Dec 27 10:30:03 embed-certs-367691 kubelet[787]: E1227 10:30:03.990340     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:06 embed-certs-367691 kubelet[787]: E1227 10:30:06.481298     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:06 embed-certs-367691 kubelet[787]: I1227 10:30:06.483516     787 scope.go:122] "RemoveContainer" containerID="8a012634efd3c64567508c28055ef95dab2cb004621f832697633648ac76ec2a"
	Dec 27 10:30:06 embed-certs-367691 kubelet[787]: I1227 10:30:06.616881     787 scope.go:122] "RemoveContainer" containerID="8a012634efd3c64567508c28055ef95dab2cb004621f832697633648ac76ec2a"
	Dec 27 10:30:07 embed-certs-367691 kubelet[787]: E1227 10:30:07.621519     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:07 embed-certs-367691 kubelet[787]: I1227 10:30:07.621561     787 scope.go:122] "RemoveContainer" containerID="614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad"
	Dec 27 10:30:07 embed-certs-367691 kubelet[787]: E1227 10:30:07.621714     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:13 embed-certs-367691 kubelet[787]: E1227 10:30:13.990168     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:13 embed-certs-367691 kubelet[787]: I1227 10:30:13.990232     787 scope.go:122] "RemoveContainer" containerID="614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad"
	Dec 27 10:30:13 embed-certs-367691 kubelet[787]: E1227 10:30:13.990444     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:18 embed-certs-367691 kubelet[787]: I1227 10:30:18.649676     787 scope.go:122] "RemoveContainer" containerID="7da2abbec178f9d68b05cbeba7d3f44de1cc48a998eac2bb77b106a98e4f6efb"
	Dec 27 10:30:21 embed-certs-367691 kubelet[787]: E1227 10:30:21.121215     787 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-t88nq" containerName="coredns"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: E1227 10:30:33.481487     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: I1227 10:30:33.482016     787 scope.go:122] "RemoveContainer" containerID="614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: I1227 10:30:33.689612     787 scope.go:122] "RemoveContainer" containerID="614c4c396f62402cbc690509e9db79e42cd747468fe45c780604cf19309e8aad"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: E1227 10:30:33.689951     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: I1227 10:30:33.689991     787 scope.go:122] "RemoveContainer" containerID="d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906"
	Dec 27 10:30:33 embed-certs-367691 kubelet[787]: E1227 10:30:33.690213     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:34 embed-certs-367691 kubelet[787]: E1227 10:30:34.693647     787 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:34 embed-certs-367691 kubelet[787]: I1227 10:30:34.693686     787 scope.go:122] "RemoveContainer" containerID="d3e7508c9848f2bffdb6064e60fd7dcd868760c4d458b2c87881edbb699f8906"
	Dec 27 10:30:34 embed-certs-367691 kubelet[787]: E1227 10:30:34.693830     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-vp5vt_kubernetes-dashboard(f7fb8eda-d289-40b2-a424-a3b818cac4ad)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-vp5vt" podUID="f7fb8eda-d289-40b2-a424-a3b818cac4ad"
	Dec 27 10:30:36 embed-certs-367691 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:30:36 embed-certs-367691 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:30:36 embed-certs-367691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f2f8c2294cb00b78dfda28e1049259ceffb42f141d0b7661ada337ade587fa06] <==
	2025/12/27 10:30:01 Using namespace: kubernetes-dashboard
	2025/12/27 10:30:01 Using in-cluster config to connect to apiserver
	2025/12/27 10:30:01 Using secret token for csrf signing
	2025/12/27 10:30:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:30:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:30:01 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:30:01 Generating JWE encryption key
	2025/12/27 10:30:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:30:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:30:02 Initializing JWE encryption key from synchronized object
	2025/12/27 10:30:02 Creating in-cluster Sidecar client
	2025/12/27 10:30:02 Serving insecurely on HTTP port: 9090
	2025/12/27 10:30:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:30:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:30:01 Starting overwatch
	
	
	==> storage-provisioner [7da2abbec178f9d68b05cbeba7d3f44de1cc48a998eac2bb77b106a98e4f6efb] <==
	I1227 10:29:48.194004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:30:18.196149       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ea8f6e14e817184fe51d0f5957f0409363ca444db8803a84523e0966ea183e7f] <==
	I1227 10:30:18.698167       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:30:18.713184       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:30:18.713328       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:30:18.716431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:22.171189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:26.431650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:30.030613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:33.083983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:36.106369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:36.115100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:30:36.115826       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:30:36.116106       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-367691_7b0c10ca-690e-454c-af46-090e71292d00!
	I1227 10:30:36.117415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41e16e59-3f7f-429b-a593-eb5c08bee361", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-367691_7b0c10ca-690e-454c-af46-090e71292d00 became leader
	W1227 10:30:36.126630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:36.155349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:30:36.227454       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-367691_7b0c10ca-690e-454c-af46-090e71292d00!
	W1227 10:30:38.158548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:38.177222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:40.181512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:40.194440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:42.198192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:30:42.224706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-367691 -n embed-certs-367691
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-367691 -n embed-certs-367691: exit status 2 (497.932638ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-367691 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.08s)
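The post-mortem above relies on two checks that can be rerun by hand after a failed pause: minikube status, which returns a non-zero exit code when any component is not in its expected state (hence the "may be ok" note next to exit status 2), and a kubectl field selector that lists pods not in the Running phase. A minimal sketch, assuming the embed-certs-367691 profile from this report still exists and the harness binary is still at out/minikube-linux-arm64:

	# Component status for the profile; a paused or stopped component yields a non-zero exit code:
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-367691 -n embed-certs-367691
	# Names of pods in any namespace that are not currently Running, as the harness queries them:
	kubectl --context embed-certs-367691 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running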

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-443576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-443576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (301.386366ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-443576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
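The MK_ADDON_ENABLE_PAUSED exit above appears to come from minikube's paused-state check, which runs the command shown in the stderr block (sudo runc list -f json) inside the node; it fails because /run/runc does not exist on this CRI-O node. A minimal sketch for reproducing that check by hand, assuming the newest-cni-443576 profile from this report is still running (the /run/crio path is an assumption about CRI-O's runtime state directory, not taken from this log):

	# Re-run the paused-state check that the addon command performs, via the harness binary:
	out/minikube-linux-arm64 ssh -p newest-cni-443576 -- sudo runc list -f json
	# See which low-level runtime state directories actually exist inside the node
	# (/run/runc is what the failing check expects; /run/crio is an assumed CRI-O location):
	out/minikube-linux-arm64 ssh -p newest-cni-443576 -- sudo ls -la /run/runc /run/crio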
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-443576
helpers_test.go:244: (dbg) docker inspect newest-cni-443576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979",
	        "Created": "2025-12-27T10:30:53.483860982Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 516197,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:30:53.549445796Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/hosts",
	        "LogPath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979-json.log",
	        "Name": "/newest-cni-443576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-443576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-443576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979",
	                "LowerDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-443576",
	                "Source": "/var/lib/docker/volumes/newest-cni-443576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-443576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-443576",
	                "name.minikube.sigs.k8s.io": "newest-cni-443576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f5633ec8b696b64cffc0069b356b5b7f7651d17c2bb9ca02528fb99a597ec2ca",
	            "SandboxKey": "/var/run/docker/netns/f5633ec8b696",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-443576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:b6:bd:bb:9b:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c76023b32637880f7809253f7e724cbfc74cd2ad7e3ca1594922140ba274d2b",
	                    "EndpointID": "071cb81f162da8206838fd88c86e54c5271844badd138b2f7976bec779c48a3e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-443576",
	                        "1f8734c86b7f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
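The inspect dump above can be narrowed to the fields that matter for reachability and resource checks; the queries below are a local-triage sketch only (the container name is taken from this run), not commands issued by the test itself:
	docker container inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-443576
	docker container inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' newest-cni-443576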
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-443576 -n newest-cni-443576
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-443576 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-443576 logs -n 25: (1.812313769s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-784377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ start   │ -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:27 UTC │ 27 Dec 25 10:27 UTC │
	│ image   │ default-k8s-diff-port-784377 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ pause   │ -p default-k8s-diff-port-784377 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ ssh     │ force-systemd-flag-915850 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p force-systemd-flag-915850                                                                                                                                                                                                                  │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p disable-driver-mounts-913868                                                                                                                                                                                                               │ disable-driver-mounts-913868 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │                     │
	│ stop    │ -p embed-certs-367691 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable dashboard -p embed-certs-367691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable metrics-server -p no-preload-241090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ stop    │ -p no-preload-241090 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable dashboard -p no-preload-241090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ image   │ embed-certs-367691 image list --format=json                                                                                                                                                                                                   │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ pause   │ -p embed-certs-367691 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ addons  │ enable metrics-server -p newest-cni-443576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
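	Aside (not part of the captured log): the command history above can be re-queried on the host without regenerating the full post-mortem dump; the profile name is taken from this run and the --audit flag is assumed to be available in this minikube build:
	  out/minikube-linux-arm64 logs --audit -p newest-cni-443576
	  out/minikube-linux-arm64 status -p newest-cni-443576 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	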
	==> Last Start <==
	Log file created at: 2025/12/27 10:30:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:30:47.739844  515697 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:30:47.740038  515697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:47.740047  515697 out.go:374] Setting ErrFile to fd 2...
	I1227 10:30:47.740054  515697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:47.740332  515697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:30:47.740794  515697 out.go:368] Setting JSON to false
	I1227 10:30:47.741857  515697 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8001,"bootTime":1766823447,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:30:47.741939  515697 start.go:143] virtualization:  
	I1227 10:30:47.747727  515697 out.go:179] * [newest-cni-443576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:30:47.751829  515697 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:30:47.751932  515697 notify.go:221] Checking for updates...
	I1227 10:30:47.759056  515697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:30:47.762561  515697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:30:47.766284  515697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:30:47.769538  515697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:30:47.772732  515697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:30:47.776500  515697 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:47.776596  515697 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:30:47.810586  515697 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:30:47.810719  515697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:30:47.909542  515697 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:30:47.897856362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:30:47.909652  515697 docker.go:319] overlay module found
	I1227 10:30:47.913030  515697 out.go:179] * Using the docker driver based on user configuration
	I1227 10:30:47.916261  515697 start.go:309] selected driver: docker
	I1227 10:30:47.916309  515697 start.go:928] validating driver "docker" against <nil>
	I1227 10:30:47.916333  515697 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:30:47.917340  515697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:30:48.044579  515697 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:30:48.032466615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:30:48.044739  515697 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1227 10:30:48.044764  515697 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1227 10:30:48.045079  515697 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:30:48.049627  515697 out.go:179] * Using Docker driver with root privileges
	I1227 10:30:48.054005  515697 cni.go:84] Creating CNI manager for ""
	I1227 10:30:48.054098  515697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:30:48.054108  515697 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:30:48.054187  515697 start.go:353] cluster config:
	{Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:30:48.057640  515697 out.go:179] * Starting "newest-cni-443576" primary control-plane node in "newest-cni-443576" cluster
	I1227 10:30:48.061802  515697 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:30:48.065153  515697 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:30:48.068050  515697 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:30:48.068103  515697 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:30:48.068133  515697 cache.go:65] Caching tarball of preloaded images
	I1227 10:30:48.068254  515697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:30:48.068576  515697 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:30:48.068591  515697 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:30:48.068721  515697 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/config.json ...
	I1227 10:30:48.068741  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/config.json: {Name:mk1f39da38d1a500495171d6f6e58e129f2d3616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:48.091008  515697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:30:48.091030  515697 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:30:48.091045  515697 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:30:48.091075  515697 start.go:360] acquireMachinesLock for newest-cni-443576: {Name:mka565ad41fecac1e9f8cd8d651491fd96f86258 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:48.091181  515697 start.go:364] duration metric: took 88.419µs to acquireMachinesLock for "newest-cni-443576"
	I1227 10:30:48.091206  515697 start.go:93] Provisioning new machine with config: &{Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:30:48.091278  515697 start.go:125] createHost starting for "" (driver="docker")
	W1227 10:30:45.676890  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:30:48.195140  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:30:48.095956  515697 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:30:48.096246  515697 start.go:159] libmachine.API.Create for "newest-cni-443576" (driver="docker")
	I1227 10:30:48.096288  515697 client.go:173] LocalClient.Create starting
	I1227 10:30:48.096368  515697 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:30:48.096401  515697 main.go:144] libmachine: Decoding PEM data...
	I1227 10:30:48.096417  515697 main.go:144] libmachine: Parsing certificate...
	I1227 10:30:48.096467  515697 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:30:48.096483  515697 main.go:144] libmachine: Decoding PEM data...
	I1227 10:30:48.096494  515697 main.go:144] libmachine: Parsing certificate...
	I1227 10:30:48.096848  515697 cli_runner.go:164] Run: docker network inspect newest-cni-443576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:30:48.114859  515697 cli_runner.go:211] docker network inspect newest-cni-443576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:30:48.114965  515697 network_create.go:284] running [docker network inspect newest-cni-443576] to gather additional debugging logs...
	I1227 10:30:48.114982  515697 cli_runner.go:164] Run: docker network inspect newest-cni-443576
	W1227 10:30:48.144375  515697 cli_runner.go:211] docker network inspect newest-cni-443576 returned with exit code 1
	I1227 10:30:48.144408  515697 network_create.go:287] error running [docker network inspect newest-cni-443576]: docker network inspect newest-cni-443576: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-443576 not found
	I1227 10:30:48.144421  515697 network_create.go:289] output of [docker network inspect newest-cni-443576]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-443576 not found
	
	** /stderr **
	I1227 10:30:48.144529  515697 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:30:48.166328  515697 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:30:48.166786  515697 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:30:48.167134  515697 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:30:48.167635  515697 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a46ac0}
	I1227 10:30:48.167654  515697 network_create.go:124] attempt to create docker network newest-cni-443576 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:30:48.167778  515697 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-443576 newest-cni-443576
	I1227 10:30:48.246484  515697 network_create.go:108] docker network newest-cni-443576 192.168.76.0/24 created
	I1227 10:30:48.246520  515697 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-443576" container
	I1227 10:30:48.246603  515697 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:30:48.265812  515697 cli_runner.go:164] Run: docker volume create newest-cni-443576 --label name.minikube.sigs.k8s.io=newest-cni-443576 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:30:48.283410  515697 oci.go:103] Successfully created a docker volume newest-cni-443576
	I1227 10:30:48.283512  515697 cli_runner.go:164] Run: docker run --rm --name newest-cni-443576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-443576 --entrypoint /usr/bin/test -v newest-cni-443576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:30:49.284088  515697 cli_runner.go:217] Completed: docker run --rm --name newest-cni-443576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-443576 --entrypoint /usr/bin/test -v newest-cni-443576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (1.000535306s)
	I1227 10:30:49.284133  515697 oci.go:107] Successfully prepared a docker volume newest-cni-443576
	I1227 10:30:49.284184  515697 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:30:49.284200  515697 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:30:49.284261  515697 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-443576:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	W1227 10:30:50.674556  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:30:53.174786  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:30:53.412991  515697 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-443576:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.128688994s)
	I1227 10:30:53.413027  515697 kic.go:203] duration metric: took 4.128824437s to extract preloaded images to volume ...
	W1227 10:30:53.413183  515697 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:30:53.413299  515697 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:30:53.468731  515697 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-443576 --name newest-cni-443576 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-443576 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-443576 --network newest-cni-443576 --ip 192.168.76.2 --volume newest-cni-443576:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:30:53.782640  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Running}}
	I1227 10:30:53.808691  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:30:53.841457  515697 cli_runner.go:164] Run: docker exec newest-cni-443576 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:30:53.902230  515697 oci.go:144] the created container "newest-cni-443576" has a running status.
	I1227 10:30:53.902280  515697 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa...
	I1227 10:30:54.134165  515697 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:30:54.161635  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:30:54.188686  515697 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:30:54.188713  515697 kic_runner.go:114] Args: [docker exec --privileged newest-cni-443576 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:30:54.270526  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:30:54.293932  515697 machine.go:94] provisionDockerMachine start ...
	I1227 10:30:54.294032  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:54.321731  515697 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:54.322136  515697 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 10:30:54.322153  515697 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:30:54.322764  515697 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60304->127.0.0.1:33448: read: connection reset by peer
	I1227 10:30:57.463600  515697 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-443576
	
	I1227 10:30:57.463624  515697 ubuntu.go:182] provisioning hostname "newest-cni-443576"
	I1227 10:30:57.463697  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:57.481996  515697 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:57.482315  515697 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 10:30:57.482327  515697 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-443576 && echo "newest-cni-443576" | sudo tee /etc/hostname
	I1227 10:30:57.634880  515697 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-443576
	
	I1227 10:30:57.635035  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:57.653698  515697 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:57.654022  515697 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 10:30:57.654038  515697 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-443576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-443576/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-443576' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1227 10:30:55.674144  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:30:57.674399  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:30:59.675440  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:30:57.800296  515697 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:30:57.800325  515697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:30:57.800373  515697 ubuntu.go:190] setting up certificates
	I1227 10:30:57.800392  515697 provision.go:84] configureAuth start
	I1227 10:30:57.800462  515697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-443576
	I1227 10:30:57.821549  515697 provision.go:143] copyHostCerts
	I1227 10:30:57.821650  515697 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:30:57.821665  515697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:30:57.821744  515697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:30:57.821846  515697 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:30:57.821855  515697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:30:57.821885  515697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:30:57.821955  515697 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:30:57.821963  515697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:30:57.821989  515697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:30:57.822047  515697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.newest-cni-443576 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-443576]
	I1227 10:30:58.127367  515697 provision.go:177] copyRemoteCerts
	I1227 10:30:58.127494  515697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:30:58.127581  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:58.145682  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:58.244197  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:30:58.263945  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:30:58.282694  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:30:58.302309  515697 provision.go:87] duration metric: took 501.893829ms to configureAuth
	I1227 10:30:58.302339  515697 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:30:58.302536  515697 config.go:182] Loaded profile config "newest-cni-443576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:58.302645  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:58.321013  515697 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:58.321332  515697 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 10:30:58.321355  515697 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:30:58.696058  515697 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:30:58.696084  515697 machine.go:97] duration metric: took 4.402132533s to provisionDockerMachine
	I1227 10:30:58.696096  515697 client.go:176] duration metric: took 10.599800622s to LocalClient.Create
	I1227 10:30:58.696109  515697 start.go:167] duration metric: took 10.5998655s to libmachine.API.Create "newest-cni-443576"
	I1227 10:30:58.696116  515697 start.go:293] postStartSetup for "newest-cni-443576" (driver="docker")
	I1227 10:30:58.696126  515697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:30:58.696191  515697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:30:58.696243  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:58.713599  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:58.822674  515697 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:30:58.826281  515697 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:30:58.826308  515697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:30:58.826320  515697 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:30:58.826375  515697 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:30:58.826467  515697 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:30:58.826575  515697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:30:58.835567  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:30:58.854256  515697 start.go:296] duration metric: took 158.12479ms for postStartSetup
	I1227 10:30:58.854654  515697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-443576
	I1227 10:30:58.872667  515697 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/config.json ...
	I1227 10:30:58.872943  515697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:30:58.872997  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:58.891074  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:58.989177  515697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:30:58.993941  515697 start.go:128] duration metric: took 10.902647547s to createHost
	I1227 10:30:58.993965  515697 start.go:83] releasing machines lock for "newest-cni-443576", held for 10.902775319s
	I1227 10:30:58.994037  515697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-443576
	I1227 10:30:59.015419  515697 ssh_runner.go:195] Run: cat /version.json
	I1227 10:30:59.015470  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:59.015526  515697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:30:59.015601  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:59.037766  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:59.039181  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:59.131963  515697 ssh_runner.go:195] Run: systemctl --version
	I1227 10:30:59.251466  515697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:30:59.293103  515697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:30:59.297685  515697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:30:59.297763  515697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:30:59.331251  515697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:30:59.331285  515697 start.go:496] detecting cgroup driver to use...
	I1227 10:30:59.331320  515697 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:30:59.331375  515697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:30:59.352837  515697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:30:59.366833  515697 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:30:59.366898  515697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:30:59.385453  515697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:30:59.405856  515697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:30:59.528930  515697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:30:59.655137  515697 docker.go:234] disabling docker service ...
	I1227 10:30:59.655258  515697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:30:59.683279  515697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:30:59.697894  515697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:30:59.826458  515697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:30:59.954467  515697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:30:59.969672  515697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:30:59.987127  515697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:30:59.987205  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:59.996094  515697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:30:59.996175  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.047420  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.098178  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.130612  515697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:31:00.145558  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.241379  515697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.303749  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.326397  515697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:31:00.340982  515697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:31:00.355141  515697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:31:00.533793  515697 ssh_runner.go:195] Run: sudo systemctl restart crio
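
For reference, the CRI-O preparation performed in the commands above boils down to the following manual steps; this is a sketch reconstructed from the log (the config path, socket path and pause image tag are taken from the commands shown, everything else is assumed), not output captured from the node:

    # point crictl at the CRI-O socket
    sudo tee /etc/crictl.yaml <<'EOF'
    runtime-endpoint: unix:///var/run/crio/crio.sock
    EOF
    # switch the pause image and cgroup driver, then restart CRI-O
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
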
	I1227 10:31:00.705140  515697 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:31:00.705238  515697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:31:00.709612  515697 start.go:574] Will wait 60s for crictl version
	I1227 10:31:00.709681  515697 ssh_runner.go:195] Run: which crictl
	I1227 10:31:00.713569  515697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:31:00.739464  515697 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:31:00.739553  515697 ssh_runner.go:195] Run: crio --version
	I1227 10:31:00.768399  515697 ssh_runner.go:195] Run: crio --version
	I1227 10:31:00.800923  515697 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:31:00.803896  515697 cli_runner.go:164] Run: docker network inspect newest-cni-443576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:31:00.823946  515697 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:31:00.828025  515697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:31:00.841281  515697 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 10:31:00.844251  515697 kubeadm.go:884] updating cluster {Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:31:00.844404  515697 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:31:00.844479  515697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:31:00.880209  515697 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:31:00.880240  515697 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:31:00.880297  515697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:31:00.905979  515697 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:31:00.906004  515697 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:31:00.906014  515697 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:31:00.906104  515697 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-443576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:31:00.906192  515697 ssh_runner.go:195] Run: crio config
	I1227 10:31:00.986419  515697 cni.go:84] Creating CNI manager for ""
	I1227 10:31:00.986443  515697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:31:00.986465  515697 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 10:31:00.986490  515697 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-443576 NodeName:newest-cni-443576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:31:00.986618  515697 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-443576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:31:00.986689  515697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:31:01.002734  515697 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:31:01.002824  515697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:31:01.013674  515697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 10:31:01.029670  515697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:31:01.043495  515697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
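
For reference, the kubeadm configuration rendered above, written here to /var/tmp/minikube/kubeadm.yaml.new, can be exercised without touching cluster state by asking kubeadm for a dry run. A minimal sketch, assuming the kubeadm binary shipped alongside the kubelet in the versioned binaries directory used later in this log:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
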
	I1227 10:31:01.058211  515697 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:31:01.062074  515697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:31:01.073140  515697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:31:01.202135  515697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:31:01.220551  515697 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576 for IP: 192.168.76.2
	I1227 10:31:01.220572  515697 certs.go:195] generating shared ca certs ...
	I1227 10:31:01.220589  515697 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.220742  515697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:31:01.220785  515697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:31:01.220794  515697 certs.go:257] generating profile certs ...
	I1227 10:31:01.220855  515697 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.key
	I1227 10:31:01.220866  515697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.crt with IP's: []
	I1227 10:31:01.500299  515697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.crt ...
	I1227 10:31:01.500330  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.crt: {Name:mk27b2f1703e7ad03071d745625e8d67bf1df612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.500554  515697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.key ...
	I1227 10:31:01.500572  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.key: {Name:mk404625f8d36cbd78f1b568e4ef9e18bb075ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.500661  515697 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key.ca20e437
	I1227 10:31:01.500680  515697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt.ca20e437 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:31:01.588269  515697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt.ca20e437 ...
	I1227 10:31:01.588297  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt.ca20e437: {Name:mk29454875d1a4a7ee8adc3fcaf51d5bb4d705ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.588468  515697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key.ca20e437 ...
	I1227 10:31:01.588483  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key.ca20e437: {Name:mk2a7f5eccc8ec49ed8c6efb935a9f8f9bfcde90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.588569  515697 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt.ca20e437 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt
	I1227 10:31:01.588646  515697 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key.ca20e437 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key
	I1227 10:31:01.588717  515697 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.key
	I1227 10:31:01.588737  515697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.crt with IP's: []
	I1227 10:31:01.832033  515697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.crt ...
	I1227 10:31:01.832067  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.crt: {Name:mk5735db882065a1ec364cd7306f56721cca6054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.832257  515697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.key ...
	I1227 10:31:01.832272  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.key: {Name:mk1720aa67684702f71e0f4dddbb7c41098f2696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.832472  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:31:01.832519  515697 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:31:01.832534  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:31:01.832560  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:31:01.832588  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:31:01.832617  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:31:01.832667  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:31:01.833253  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:31:01.853223  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:31:01.873422  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:31:01.892060  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:31:01.911133  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 10:31:01.929859  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:31:01.948877  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:31:01.982511  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:31:02.005482  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:31:02.029021  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:31:02.052055  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:31:02.073829  515697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
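
For reference, the SANs baked into the apiserver certificate copied above (generated earlier for IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2) can be inspected on the node. A sketch, assuming an SSH session into the minikube container:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
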
	I1227 10:31:02.088165  515697 ssh_runner.go:195] Run: openssl version
	I1227 10:31:02.097022  515697 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:31:02.107255  515697 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:31:02.116894  515697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:31:02.121062  515697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:31:02.121131  515697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:31:02.162871  515697 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:31:02.173424  515697 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2998112.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:31:02.182302  515697 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:31:02.190406  515697 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:31:02.198225  515697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:31:02.202074  515697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:31:02.202144  515697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:31:02.244791  515697 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:31:02.254077  515697 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:31:02.262505  515697 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:31:02.270692  515697 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:31:02.278658  515697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:31:02.282361  515697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:31:02.282427  515697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:31:02.324496  515697 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:31:02.332690  515697 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/299811.pem /etc/ssl/certs/51391683.0
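
For reference, the /etc/ssl/certs/<hash>.0 symlinks created in the steps above are named after the certificate's subject hash, which is exactly what the preceding `openssl x509 -hash` runs compute. A sketch of reproducing one of them by hand (file names taken from the log; per the log, minikubeCA.pem hashes to b5213941):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
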
	I1227 10:31:02.340741  515697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:31:02.344969  515697 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:31:02.345067  515697 kubeadm.go:401] StartCluster: {Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:31:02.345192  515697 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:31:02.345259  515697 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:31:02.373208  515697 cri.go:96] found id: ""
	I1227 10:31:02.373283  515697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:31:02.381631  515697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:31:02.390013  515697 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:31:02.390109  515697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:31:02.398535  515697 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:31:02.398554  515697 kubeadm.go:158] found existing configuration files:
	
	I1227 10:31:02.398610  515697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:31:02.406819  515697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:31:02.406893  515697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:31:02.415759  515697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:31:02.424440  515697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:31:02.424540  515697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:31:02.432344  515697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:31:02.440655  515697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:31:02.440751  515697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:31:02.448613  515697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:31:02.456813  515697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:31:02.456944  515697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:31:02.466276  515697 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:31:02.505238  515697 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:31:02.505302  515697 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:31:02.581332  515697 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:31:02.581409  515697 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:31:02.581450  515697 kubeadm.go:319] OS: Linux
	I1227 10:31:02.581500  515697 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:31:02.581553  515697 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:31:02.581604  515697 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:31:02.581655  515697 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:31:02.581707  515697 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:31:02.581759  515697 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:31:02.581805  515697 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:31:02.581857  515697 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:31:02.581907  515697 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:31:02.650205  515697 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:31:02.650366  515697 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:31:02.650499  515697 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:31:02.660124  515697 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:31:02.666548  515697 out.go:252]   - Generating certificates and keys ...
	I1227 10:31:02.666692  515697 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:31:02.666783  515697 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1227 10:31:01.676557  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:31:04.175007  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:31:02.753120  515697 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:31:03.008736  515697 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:31:03.355280  515697 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:31:03.873610  515697 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:31:03.964525  515697 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:31:03.964865  515697 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-443576] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:31:04.188126  515697 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:31:04.188512  515697 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-443576] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:31:04.482811  515697 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:31:04.662329  515697 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:31:04.955727  515697 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:31:04.956022  515697 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:31:05.287476  515697 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:31:05.365374  515697 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:31:05.433818  515697 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:31:05.514997  515697 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:31:06.051405  515697 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:31:06.052170  515697 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:31:06.054826  515697 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:31:06.058107  515697 out.go:252]   - Booting up control plane ...
	I1227 10:31:06.058222  515697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:31:06.058993  515697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:31:06.059823  515697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:31:06.076270  515697 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:31:06.076379  515697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:31:06.083780  515697 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:31:06.084137  515697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:31:06.084184  515697 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:31:06.228851  515697 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:31:06.228977  515697 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:31:07.228164  515697 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001477804s
	I1227 10:31:07.232819  515697 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:31:07.232915  515697 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 10:31:07.233223  515697 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:31:07.233316  515697 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1227 10:31:06.175500  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:31:08.674830  512231 pod_ready.go:94] pod "coredns-7d764666f9-5p545" is "Ready"
	I1227 10:31:08.674864  512231 pod_ready.go:86] duration metric: took 34.506418584s for pod "coredns-7d764666f9-5p545" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.678036  512231 pod_ready.go:83] waiting for pod "etcd-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.683079  512231 pod_ready.go:94] pod "etcd-no-preload-241090" is "Ready"
	I1227 10:31:08.683105  512231 pod_ready.go:86] duration metric: took 5.037713ms for pod "etcd-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.685337  512231 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.693678  512231 pod_ready.go:94] pod "kube-apiserver-no-preload-241090" is "Ready"
	I1227 10:31:08.693704  512231 pod_ready.go:86] duration metric: took 8.343084ms for pod "kube-apiserver-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.696061  512231 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.872178  512231 pod_ready.go:94] pod "kube-controller-manager-no-preload-241090" is "Ready"
	I1227 10:31:08.872257  512231 pod_ready.go:86] duration metric: took 176.1169ms for pod "kube-controller-manager-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:09.072688  512231 pod_ready.go:83] waiting for pod "kube-proxy-8xv88" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:09.472735  512231 pod_ready.go:94] pod "kube-proxy-8xv88" is "Ready"
	I1227 10:31:09.472780  512231 pod_ready.go:86] duration metric: took 400.067073ms for pod "kube-proxy-8xv88" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:09.673252  512231 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:10.072289  512231 pod_ready.go:94] pod "kube-scheduler-no-preload-241090" is "Ready"
	I1227 10:31:10.072315  512231 pod_ready.go:86] duration metric: took 399.036302ms for pod "kube-scheduler-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:10.072328  512231 pod_ready.go:40] duration metric: took 35.908081379s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:31:10.154141  512231 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:31:10.157185  512231 out.go:203] 
	W1227 10:31:10.160065  512231 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:31:10.162850  512231 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:31:10.165699  512231 out.go:179] * Done! kubectl is now configured to use "no-preload-241090" cluster and "default" namespace by default
	I1227 10:31:09.747505  515697 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.514278837s
	I1227 10:31:10.957891  515697 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.725029005s
	I1227 10:31:12.734297  515697 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501385134s
	I1227 10:31:12.766618  515697 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:31:12.781395  515697 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:31:12.798518  515697 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:31:12.798724  515697 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-443576 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:31:12.816666  515697 kubeadm.go:319] [bootstrap-token] Using token: 9unjxw.u0987e039sxivp41
	I1227 10:31:12.819773  515697 out.go:252]   - Configuring RBAC rules ...
	I1227 10:31:12.819915  515697 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:31:12.824415  515697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:31:12.833307  515697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:31:12.841763  515697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:31:12.846427  515697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:31:12.851137  515697 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:31:13.142811  515697 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:31:13.574149  515697 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:31:14.141060  515697 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:31:14.142169  515697 kubeadm.go:319] 
	I1227 10:31:14.142248  515697 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:31:14.142253  515697 kubeadm.go:319] 
	I1227 10:31:14.142330  515697 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:31:14.142335  515697 kubeadm.go:319] 
	I1227 10:31:14.142360  515697 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:31:14.142418  515697 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:31:14.142470  515697 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:31:14.142474  515697 kubeadm.go:319] 
	I1227 10:31:14.142534  515697 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:31:14.142540  515697 kubeadm.go:319] 
	I1227 10:31:14.142588  515697 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:31:14.142591  515697 kubeadm.go:319] 
	I1227 10:31:14.142642  515697 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:31:14.142718  515697 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:31:14.142786  515697 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:31:14.142790  515697 kubeadm.go:319] 
	I1227 10:31:14.142874  515697 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:31:14.142968  515697 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:31:14.142973  515697 kubeadm.go:319] 
	I1227 10:31:14.143062  515697 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9unjxw.u0987e039sxivp41 \
	I1227 10:31:14.143186  515697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 \
	I1227 10:31:14.143208  515697 kubeadm.go:319] 	--control-plane 
	I1227 10:31:14.143212  515697 kubeadm.go:319] 
	I1227 10:31:14.143297  515697 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:31:14.143300  515697 kubeadm.go:319] 
	I1227 10:31:14.143383  515697 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9unjxw.u0987e039sxivp41 \
	I1227 10:31:14.143485  515697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 
	I1227 10:31:14.148379  515697 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:31:14.148810  515697 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:31:14.148923  515697 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
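
For reference, the --discovery-token-ca-cert-hash value printed in the join commands above can be re-derived from the cluster CA with the standard kubeadm recipe. A sketch, using the CA path minikube provisions per this log and assuming an RSA CA key:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
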
	I1227 10:31:14.148944  515697 cni.go:84] Creating CNI manager for ""
	I1227 10:31:14.148952  515697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:31:14.152324  515697 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:31:14.155359  515697 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:31:14.159782  515697 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 10:31:14.159802  515697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:31:14.176175  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 10:31:14.505318  515697 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:31:14.505474  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:14.505476  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-443576 minikube.k8s.io/updated_at=2025_12_27T10_31_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=newest-cni-443576 minikube.k8s.io/primary=true
	I1227 10:31:14.671145  515697 ops.go:34] apiserver oom_adj: -16
	I1227 10:31:14.671266  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:15.171388  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:15.671701  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:16.171436  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:16.671693  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:17.172328  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:17.671871  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:18.172062  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:18.672045  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:18.792460  515697 kubeadm.go:1114] duration metric: took 4.287061597s to wait for elevateKubeSystemPrivileges
	I1227 10:31:18.792489  515697 kubeadm.go:403] duration metric: took 16.447445382s to StartCluster
	I1227 10:31:18.792508  515697 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:18.792572  515697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:31:18.793491  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:18.793728  515697 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:31:18.793814  515697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:31:18.794075  515697 config.go:182] Loaded profile config "newest-cni-443576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:31:18.794110  515697 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:31:18.794166  515697 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-443576"
	I1227 10:31:18.794183  515697 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-443576"
	I1227 10:31:18.794224  515697 host.go:66] Checking if "newest-cni-443576" exists ...
	I1227 10:31:18.795056  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:31:18.795211  515697 addons.go:70] Setting default-storageclass=true in profile "newest-cni-443576"
	I1227 10:31:18.795227  515697 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-443576"
	I1227 10:31:18.795467  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:31:18.800083  515697 out.go:179] * Verifying Kubernetes components...
	I1227 10:31:18.803148  515697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:31:18.838031  515697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:31:18.840473  515697 addons.go:239] Setting addon default-storageclass=true in "newest-cni-443576"
	I1227 10:31:18.840516  515697 host.go:66] Checking if "newest-cni-443576" exists ...
	I1227 10:31:18.840992  515697 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:31:18.841008  515697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:31:18.841062  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:31:18.843758  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:31:18.876080  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:31:18.882393  515697 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:31:18.882414  515697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:31:18.882477  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:31:18.916112  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:31:19.157173  515697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:31:19.171104  515697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:31:19.171233  515697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:31:19.187596  515697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:31:20.100778  515697 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:31:20.100898  515697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:31:20.101062  515697 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 10:31:20.139014  515697 api_server.go:72] duration metric: took 1.345254447s to wait for apiserver process to appear ...
	I1227 10:31:20.139039  515697 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:31:20.139062  515697 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:31:20.153115  515697 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:31:20.158296  515697 api_server.go:141] control plane version: v1.35.0
	I1227 10:31:20.158377  515697 api_server.go:131] duration metric: took 19.330723ms to wait for apiserver health ...
	I1227 10:31:20.158402  515697 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:31:20.179855  515697 system_pods.go:59] 9 kube-system pods found
	I1227 10:31:20.180045  515697 system_pods.go:61] "coredns-7d764666f9-kndw2" [92c4f590-9ed1-4e08-a1d5-6e21b6bd13ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:31:20.180083  515697 system_pods.go:61] "coredns-7d764666f9-w5pw2" [90fb0105-0460-434f-9823-e2a713de5c12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:31:20.180092  515697 system_pods.go:61] "etcd-newest-cni-443576" [7748ef5b-d12d-4ab3-84cc-bcbfff673ff6] Running
	I1227 10:31:20.180100  515697 system_pods.go:61] "kindnet-5d2fh" [50656616-4132-47e7-a39a-86fcb9ca8a73] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:31:20.180105  515697 system_pods.go:61] "kube-apiserver-newest-cni-443576" [8edbae58-8cd9-4086-81f0-1f57bea869e4] Running
	I1227 10:31:20.180148  515697 system_pods.go:61] "kube-controller-manager-newest-cni-443576" [8dbcddda-24bc-492b-8638-fe0d2d40c827] Running
	I1227 10:31:20.180156  515697 system_pods.go:61] "kube-proxy-xj5vc" [dda1f65f-8d91-4868-a723-87bf2ec5bef8] Running
	I1227 10:31:20.180161  515697 system_pods.go:61] "kube-scheduler-newest-cni-443576" [12ee9bd3-c8e6-4d60-97f5-a74b7ad7fe94] Running
	I1227 10:31:20.180167  515697 system_pods.go:61] "storage-provisioner" [ddc68cbe-819e-4a44-a2da-fd69d485cde1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:31:20.180174  515697 system_pods.go:74] duration metric: took 21.751047ms to wait for pod list to return data ...
	I1227 10:31:20.180183  515697 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:31:20.181324  515697 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:31:20.184590  515697 addons.go:530] duration metric: took 1.390469233s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 10:31:20.203087  515697 default_sa.go:45] found service account: "default"
	I1227 10:31:20.203110  515697 default_sa.go:55] duration metric: took 22.921784ms for default service account to be created ...
	I1227 10:31:20.203124  515697 kubeadm.go:587] duration metric: took 1.40937244s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:31:20.203140  515697 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:31:20.220017  515697 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:31:20.220048  515697 node_conditions.go:123] node cpu capacity is 2
	I1227 10:31:20.220114  515697 node_conditions.go:105] duration metric: took 16.968148ms to run NodePressure ...
	I1227 10:31:20.220128  515697 start.go:242] waiting for startup goroutines ...
	I1227 10:31:20.605242  515697 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-443576" context rescaled to 1 replicas
	I1227 10:31:20.605321  515697 start.go:247] waiting for cluster config update ...
	I1227 10:31:20.605348  515697 start.go:256] writing updated cluster config ...
	I1227 10:31:20.605675  515697 ssh_runner.go:195] Run: rm -f paused
	I1227 10:31:20.673734  515697 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:31:20.677225  515697 out.go:203] 
	W1227 10:31:20.680112  515697 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:31:20.683055  515697 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:31:20.686036  515697 out.go:179] * Done! kubectl is now configured to use "newest-cni-443576" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.324192104Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-xj5vc/POD" id=aa9a7e56-0b35-499f-aee9-7b2d947e99b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.325700723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.332463474Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=2a59fa01-baca-4651-b337-12c5b5b44cc6 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.355824853Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.371755829Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=aa9a7e56-0b35-499f-aee9-7b2d947e99b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.37808226Z" level=info msg="Ran pod sandbox 4376f24d8b9bfb90eb2caa90a9682127d8802bb203efd80530f089593d565f9f with infra container: kube-system/kube-proxy-xj5vc/POD" id=aa9a7e56-0b35-499f-aee9-7b2d947e99b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.379644754Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=81b2de2d-f625-4aa6-b1a8-75451a753160 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.38871573Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=86088c55-75b8-4408-a2a8-4986684d26b3 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.395106236Z" level=info msg="Creating container: kube-system/kube-proxy-xj5vc/kube-proxy" id=5b647954-4ef0-45f0-92e3-4677c0b88991 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.395222636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.403641888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.405061513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.477993906Z" level=info msg="Created container 5b92b870de06130d8d8d4df590a12b6d40816014fea686daf81a58ad6a1181ef: kube-system/kube-proxy-xj5vc/kube-proxy" id=5b647954-4ef0-45f0-92e3-4677c0b88991 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.479041703Z" level=info msg="Starting container: 5b92b870de06130d8d8d4df590a12b6d40816014fea686daf81a58ad6a1181ef" id=ad462308-42da-4e63-bd05-ce1039e4ef1f name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:31:19 newest-cni-443576 crio[836]: time="2025-12-27T10:31:19.488349761Z" level=info msg="Started container" PID=1469 containerID=5b92b870de06130d8d8d4df590a12b6d40816014fea686daf81a58ad6a1181ef description=kube-system/kube-proxy-xj5vc/kube-proxy id=ad462308-42da-4e63-bd05-ce1039e4ef1f name=/runtime.v1.RuntimeService/StartContainer sandboxID=4376f24d8b9bfb90eb2caa90a9682127d8802bb203efd80530f089593d565f9f
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.122524053Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3" id=2a59fa01-baca-4651-b337-12c5b5b44cc6 name=/runtime.v1.ImageService/PullImage
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.126393984Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=f32e85ca-da46-4288-9dea-3171e161ec54 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.134606071Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b753154d-1514-472b-8ba0-8d59db4f4324 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.141084439Z" level=info msg="Creating container: kube-system/kindnet-5d2fh/kindnet-cni" id=38fdda4b-82e6-46be-9a7e-33af3b10562f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.141196514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.152345845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.155002086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.17297186Z" level=info msg="Created container 05cd4dc52c677c8686dce244e29ce61c2f1c509d9e45a39323ad28b03b3780a9: kube-system/kindnet-5d2fh/kindnet-cni" id=38fdda4b-82e6-46be-9a7e-33af3b10562f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.174864146Z" level=info msg="Starting container: 05cd4dc52c677c8686dce244e29ce61c2f1c509d9e45a39323ad28b03b3780a9" id=3fbf4d09-68e7-4432-8028-5fb1ab3891ba name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:31:22 newest-cni-443576 crio[836]: time="2025-12-27T10:31:22.182200065Z" level=info msg="Started container" PID=1716 containerID=05cd4dc52c677c8686dce244e29ce61c2f1c509d9e45a39323ad28b03b3780a9 description=kube-system/kindnet-5d2fh/kindnet-cni id=3fbf4d09-68e7-4432-8028-5fb1ab3891ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=a036fc7134e082cec08933dd14121fda6d8421cce304f5f8668776e0572d0afe
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	05cd4dc52c677       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   Less than a second ago   Running             kindnet-cni               0                   a036fc7134e08       kindnet-5d2fh                               kube-system
	5b92b870de061       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     3 seconds ago            Running             kube-proxy                0                   4376f24d8b9bf       kube-proxy-xj5vc                            kube-system
	49307029c040d       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     15 seconds ago           Running             kube-controller-manager   0                   610d840913829       kube-controller-manager-newest-cni-443576   kube-system
	227dc136d4d12       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     15 seconds ago           Running             etcd                      0                   38c397330ea6f       etcd-newest-cni-443576                      kube-system
	127b3eac39b08       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     15 seconds ago           Running             kube-scheduler            0                   efa2aae18585e       kube-scheduler-newest-cni-443576            kube-system
	ac77eb4c6104f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     15 seconds ago           Running             kube-apiserver            0                   ce44c63ac485c       kube-apiserver-newest-cni-443576            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-443576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-443576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=newest-cni-443576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_31_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:31:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-443576
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:31:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:31:13 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:31:13 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:31:13 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 10:31:13 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-443576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                469dccd3-aab7-4ad3-8e7d-e13b529d966f
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-443576                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-5d2fh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-443576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-controller-manager-newest-cni-443576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-xj5vc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-443576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-443576 event: Registered Node newest-cni-443576 in Controller
	
	
	==> dmesg <==
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	[Dec27 10:29] overlayfs: idmapped layers are currently not supported
	[ +34.978626] overlayfs: idmapped layers are currently not supported
	[Dec27 10:30] overlayfs: idmapped layers are currently not supported
	[Dec27 10:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [227dc136d4d12907e2332dd9a402f8632bcc9b801a53ceffd60dac7546f6ed5c] <==
	{"level":"info","ts":"2025-12-27T10:31:07.637057Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:31:07.697783Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-27T10:31:07.697947Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-27T10:31:07.698048Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-27T10:31:07.698101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:31:07.698152Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:07.700008Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:07.700080Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:31:07.700121Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:07.700168Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:07.702484Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-443576 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:31:07.702837Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:31:07.704227Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:31:07.704404Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:31:07.704512Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:31:07.704561Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:31:07.706050Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:31:07.706214Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:31:07.706283Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-27T10:31:07.706345Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-27T10:31:07.706439Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-27T10:31:07.707245Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:31:07.709184Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:31:07.740634Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:31:07.757044Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 10:31:23 up  2:13,  0 user,  load average: 4.51, 3.01, 2.34
	Linux newest-cni-443576 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [05cd4dc52c677c8686dce244e29ce61c2f1c509d9e45a39323ad28b03b3780a9] <==
	I1227 10:31:22.317706       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:31:22.317965       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:31:22.318086       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:31:22.318110       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:31:22.318124       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:31:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:31:22.525272       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:31:22.525301       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:31:22.525323       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:31:22.525471       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1227 10:31:22.826057       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:31:22.826150       1 metrics.go:72] Registering metrics
	I1227 10:31:22.826240       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [ac77eb4c6104fbc8e5fb75223f3fc1e07278166f990e3231586de9600d750152] <==
	I1227 10:31:11.006920       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:11.007242       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 10:31:11.007263       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:31:11.023537       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:31:11.024127       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:31:11.049657       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:31:11.049755       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1227 10:31:11.185721       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:31:11.605070       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1227 10:31:11.611253       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1227 10:31:11.611281       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:31:12.362699       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:31:12.417789       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:31:12.532259       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1227 10:31:12.539660       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1227 10:31:12.540898       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:31:12.545925       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:31:12.845230       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:31:13.555927       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:31:13.572500       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1227 10:31:13.585299       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1227 10:31:18.400963       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:31:18.410937       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:31:18.499431       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:31:18.890232       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [49307029c040d500fbccdffa8e6a0885e516d284c130d96fd448b304785f9943] <==
	I1227 10:31:17.668099       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.668172       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.668612       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.669698       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.670374       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.671417       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.671505       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.671596       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1227 10:31:17.671688       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-443576"
	I1227 10:31:17.671780       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1227 10:31:17.674182       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.674489       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.674535       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.674637       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.679097       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.679362       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.680593       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.680662       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.680694       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.689300       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.707325       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-443576" podCIDRs=["10.42.0.0/24"]
	I1227 10:31:17.759154       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.768503       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:17.768535       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:31:17.768545       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [5b92b870de06130d8d8d4df590a12b6d40816014fea686daf81a58ad6a1181ef] <==
	I1227 10:31:19.592790       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:31:19.713719       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:31:19.814039       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:19.814068       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:31:19.814227       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:31:19.850329       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:31:19.850388       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:31:19.859466       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:31:19.859782       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:31:19.859806       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:31:19.873207       1 config.go:200] "Starting service config controller"
	I1227 10:31:19.873232       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:31:19.873252       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:31:19.873256       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:31:19.873284       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:31:19.873289       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:31:19.874052       1 config.go:309] "Starting node config controller"
	I1227 10:31:19.874062       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:31:19.874068       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:31:19.974275       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:31:19.974310       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:31:19.974335       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [127b3eac39b084c0bb246ad709c4ee6e2a8b851c92cb5ee6ed43c7467951eae8] <==
	E1227 10:31:10.956548       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E1227 10:31:10.963559       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1227 10:31:10.965347       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1227 10:31:10.971883       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1227 10:31:10.971883       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1227 10:31:10.971943       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1227 10:31:10.972047       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1227 10:31:10.972082       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:31:10.972089       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:31:10.972173       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1227 10:31:10.972213       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1227 10:31:10.972253       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1227 10:31:10.972262       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1227 10:31:10.972300       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1227 10:31:10.972331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1227 10:31:10.972352       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:31:10.972536       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1227 10:31:10.972610       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:31:10.972658       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1227 10:31:11.844819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1227 10:31:11.925098       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1227 10:31:12.087817       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1227 10:31:12.109735       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1227 10:31:12.249492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I1227 10:31:14.224057       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:31:15 newest-cni-443576 kubelet[1288]: E1227 10:31:15.603599    1288 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-443576" containerName="kube-scheduler"
	Dec 27 10:31:15 newest-cni-443576 kubelet[1288]: E1227 10:31:15.603785    1288 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-443576" containerName="kube-apiserver"
	Dec 27 10:31:15 newest-cni-443576 kubelet[1288]: I1227 10:31:15.718415    1288 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-443576" podStartSLOduration=2.718397148 podStartE2EDuration="2.718397148s" podCreationTimestamp="2025-12-27 10:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:31:15.689149605 +0000 UTC m=+2.317422662" watchObservedRunningTime="2025-12-27 10:31:15.718397148 +0000 UTC m=+2.346670197"
	Dec 27 10:31:15 newest-cni-443576 kubelet[1288]: I1227 10:31:15.731573    1288 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-443576" podStartSLOduration=2.731550324 podStartE2EDuration="2.731550324s" podCreationTimestamp="2025-12-27 10:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:31:15.719114808 +0000 UTC m=+2.347387874" watchObservedRunningTime="2025-12-27 10:31:15.731550324 +0000 UTC m=+2.359823381"
	Dec 27 10:31:15 newest-cni-443576 kubelet[1288]: I1227 10:31:15.745475    1288 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-443576" podStartSLOduration=3.745457345 podStartE2EDuration="3.745457345s" podCreationTimestamp="2025-12-27 10:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:31:15.732030821 +0000 UTC m=+2.360303878" watchObservedRunningTime="2025-12-27 10:31:15.745457345 +0000 UTC m=+2.373730410"
	Dec 27 10:31:16 newest-cni-443576 kubelet[1288]: E1227 10:31:16.604863    1288 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-443576" containerName="kube-scheduler"
	Dec 27 10:31:17 newest-cni-443576 kubelet[1288]: I1227 10:31:17.725694    1288 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-443576" podStartSLOduration=4.725672116 podStartE2EDuration="4.725672116s" podCreationTimestamp="2025-12-27 10:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:31:15.746339372 +0000 UTC m=+2.374612429" watchObservedRunningTime="2025-12-27 10:31:17.725672116 +0000 UTC m=+4.353945165"
	Dec 27 10:31:17 newest-cni-443576 kubelet[1288]: I1227 10:31:17.753944    1288 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 10:31:17 newest-cni-443576 kubelet[1288]: I1227 10:31:17.754767    1288 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.022479    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50656616-4132-47e7-a39a-86fcb9ca8a73-xtables-lock\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.022526    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50656616-4132-47e7-a39a-86fcb9ca8a73-lib-modules\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.022551    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txd82\" (UniqueName: \"kubernetes.io/projected/50656616-4132-47e7-a39a-86fcb9ca8a73-kube-api-access-txd82\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.022572    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/50656616-4132-47e7-a39a-86fcb9ca8a73-cni-cfg\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.022591    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nhvm\" (UniqueName: \"kubernetes.io/projected/dda1f65f-8d91-4868-a723-87bf2ec5bef8-kube-api-access-7nhvm\") pod \"kube-proxy-xj5vc\" (UID: \"dda1f65f-8d91-4868-a723-87bf2ec5bef8\") " pod="kube-system/kube-proxy-xj5vc"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.022608    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dda1f65f-8d91-4868-a723-87bf2ec5bef8-kube-proxy\") pod \"kube-proxy-xj5vc\" (UID: \"dda1f65f-8d91-4868-a723-87bf2ec5bef8\") " pod="kube-system/kube-proxy-xj5vc"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.022625    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda1f65f-8d91-4868-a723-87bf2ec5bef8-lib-modules\") pod \"kube-proxy-xj5vc\" (UID: \"dda1f65f-8d91-4868-a723-87bf2ec5bef8\") " pod="kube-system/kube-proxy-xj5vc"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.022645    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda1f65f-8d91-4868-a723-87bf2ec5bef8-xtables-lock\") pod \"kube-proxy-xj5vc\" (UID: \"dda1f65f-8d91-4868-a723-87bf2ec5bef8\") " pod="kube-system/kube-proxy-xj5vc"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.224303    1288 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: W1227 10:31:19.313023    1288 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/crio-a036fc7134e082cec08933dd14121fda6d8421cce304f5f8668776e0572d0afe WatchSource:0}: Error finding container a036fc7134e082cec08933dd14121fda6d8421cce304f5f8668776e0572d0afe: Status 404 returned error can't find the container with id a036fc7134e082cec08933dd14121fda6d8421cce304f5f8668776e0572d0afe
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: W1227 10:31:19.375057    1288 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/crio-4376f24d8b9bfb90eb2caa90a9682127d8802bb203efd80530f089593d565f9f WatchSource:0}: Error finding container 4376f24d8b9bfb90eb2caa90a9682127d8802bb203efd80530f089593d565f9f: Status 404 returned error can't find the container with id 4376f24d8b9bfb90eb2caa90a9682127d8802bb203efd80530f089593d565f9f
	Dec 27 10:31:19 newest-cni-443576 kubelet[1288]: I1227 10:31:19.642495    1288 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-xj5vc" podStartSLOduration=1.642477966 podStartE2EDuration="1.642477966s" podCreationTimestamp="2025-12-27 10:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-27 10:31:19.64209543 +0000 UTC m=+6.270368487" watchObservedRunningTime="2025-12-27 10:31:19.642477966 +0000 UTC m=+6.270751014"
	Dec 27 10:31:21 newest-cni-443576 kubelet[1288]: E1227 10:31:21.975342    1288 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-443576" containerName="etcd"
	Dec 27 10:31:22 newest-cni-443576 kubelet[1288]: E1227 10:31:22.188950    1288 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-443576" containerName="kube-controller-manager"
	Dec 27 10:31:22 newest-cni-443576 kubelet[1288]: E1227 10:31:22.389109    1288 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-443576" containerName="kube-scheduler"
	Dec 27 10:31:22 newest-cni-443576 kubelet[1288]: E1227 10:31:22.496075    1288 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-443576" containerName="kube-apiserver"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-443576 -n newest-cni-443576
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-443576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-w5pw2 storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-443576 describe pod coredns-7d764666f9-w5pw2 storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-443576 describe pod coredns-7d764666f9-w5pw2 storage-provisioner: exit status 1 (92.484846ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-w5pw2" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-443576 describe pod coredns-7d764666f9-w5pw2 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-241090 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-241090 --alsologtostderr -v=1: exit status 80 (2.235058523s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-241090 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:31:22.357908  518558 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:31:22.358120  518558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:22.358157  518558 out.go:374] Setting ErrFile to fd 2...
	I1227 10:31:22.358177  518558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:22.362542  518558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:31:22.363022  518558 out.go:368] Setting JSON to false
	I1227 10:31:22.363076  518558 mustload.go:66] Loading cluster: no-preload-241090
	I1227 10:31:22.363533  518558 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:31:22.364113  518558 cli_runner.go:164] Run: docker container inspect no-preload-241090 --format={{.State.Status}}
	I1227 10:31:22.390744  518558 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:31:22.391102  518558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:31:22.501971  518558 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 10:31:22.489232692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:31:22.502606  518558 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-241090 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:31:22.506552  518558 out.go:179] * Pausing node no-preload-241090 ... 
	I1227 10:31:22.509660  518558 host.go:66] Checking if "no-preload-241090" exists ...
	I1227 10:31:22.510124  518558 ssh_runner.go:195] Run: systemctl --version
	I1227 10:31:22.510186  518558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-241090
	I1227 10:31:22.534040  518558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/no-preload-241090/id_rsa Username:docker}
	I1227 10:31:22.644704  518558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:31:22.691429  518558 pause.go:52] kubelet running: true
	I1227 10:31:22.691506  518558 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:31:23.043534  518558 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:31:23.043623  518558 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:31:23.134447  518558 cri.go:96] found id: "f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d"
	I1227 10:31:23.134478  518558 cri.go:96] found id: "b0b9d5cfc0f2dea1b46d3d2fc69780ae4cc75bc974985fea86b7ba9c545a6bb0"
	I1227 10:31:23.134484  518558 cri.go:96] found id: "72ffaa9a1bdb7fd593a1ecea6b92e25f4d4fa7299fed5ff41307fd00c3e24018"
	I1227 10:31:23.134489  518558 cri.go:96] found id: "ec04e59ae6c952c8927f173f6ba8de9972a72aeb5ef19a32d5625b317dc3d76e"
	I1227 10:31:23.134492  518558 cri.go:96] found id: "c80de0856a7e97dcdca435688c0cce0be0c03d163eb8a2f5a8dcb13ec35e129d"
	I1227 10:31:23.134496  518558 cri.go:96] found id: "0be2bd393e285cb49c8e5b5f66063ce6781e934558ad30c47aa3aec488565ab9"
	I1227 10:31:23.134499  518558 cri.go:96] found id: "5ef714a1055a6cf93a2f1f0f649e4d4fa6f789af9150c2755a1c2d09b53037b1"
	I1227 10:31:23.134501  518558 cri.go:96] found id: "4264015374f91b531af599acfc367aa072b442eccc1ffead423255914a0d9f09"
	I1227 10:31:23.134504  518558 cri.go:96] found id: "96e2bc84c864d4d7cc89f0f2517101b59c5cc5096c04209185554cf59b742f37"
	I1227 10:31:23.134525  518558 cri.go:96] found id: "f85ff0f366cab3121436ea435162a86c87b97787dc26bbbc8b0dd95316f338c4"
	I1227 10:31:23.134537  518558 cri.go:96] found id: "d6dac70955d6676da0bde6507fddc874505c8c28166dd15dd89c7d34aac1b578"
	I1227 10:31:23.134540  518558 cri.go:96] found id: ""
	I1227 10:31:23.134599  518558 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:31:23.149150  518558 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:31:23Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:31:23.292539  518558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:31:23.308934  518558 pause.go:52] kubelet running: false
	I1227 10:31:23.309083  518558 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:31:23.554122  518558 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:31:23.554202  518558 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:31:23.683576  518558 cri.go:96] found id: "f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d"
	I1227 10:31:23.683597  518558 cri.go:96] found id: "b0b9d5cfc0f2dea1b46d3d2fc69780ae4cc75bc974985fea86b7ba9c545a6bb0"
	I1227 10:31:23.683602  518558 cri.go:96] found id: "72ffaa9a1bdb7fd593a1ecea6b92e25f4d4fa7299fed5ff41307fd00c3e24018"
	I1227 10:31:23.683606  518558 cri.go:96] found id: "ec04e59ae6c952c8927f173f6ba8de9972a72aeb5ef19a32d5625b317dc3d76e"
	I1227 10:31:23.683609  518558 cri.go:96] found id: "c80de0856a7e97dcdca435688c0cce0be0c03d163eb8a2f5a8dcb13ec35e129d"
	I1227 10:31:23.683612  518558 cri.go:96] found id: "0be2bd393e285cb49c8e5b5f66063ce6781e934558ad30c47aa3aec488565ab9"
	I1227 10:31:23.683615  518558 cri.go:96] found id: "5ef714a1055a6cf93a2f1f0f649e4d4fa6f789af9150c2755a1c2d09b53037b1"
	I1227 10:31:23.683618  518558 cri.go:96] found id: "4264015374f91b531af599acfc367aa072b442eccc1ffead423255914a0d9f09"
	I1227 10:31:23.683621  518558 cri.go:96] found id: "96e2bc84c864d4d7cc89f0f2517101b59c5cc5096c04209185554cf59b742f37"
	I1227 10:31:23.683627  518558 cri.go:96] found id: "f85ff0f366cab3121436ea435162a86c87b97787dc26bbbc8b0dd95316f338c4"
	I1227 10:31:23.683631  518558 cri.go:96] found id: "d6dac70955d6676da0bde6507fddc874505c8c28166dd15dd89c7d34aac1b578"
	I1227 10:31:23.683633  518558 cri.go:96] found id: ""
	I1227 10:31:23.683684  518558 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:31:24.111994  518558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:31:24.126272  518558 pause.go:52] kubelet running: false
	I1227 10:31:24.126332  518558 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:31:24.349560  518558 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:31:24.349636  518558 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:31:24.461470  518558 cri.go:96] found id: "f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d"
	I1227 10:31:24.461491  518558 cri.go:96] found id: "b0b9d5cfc0f2dea1b46d3d2fc69780ae4cc75bc974985fea86b7ba9c545a6bb0"
	I1227 10:31:24.461495  518558 cri.go:96] found id: "72ffaa9a1bdb7fd593a1ecea6b92e25f4d4fa7299fed5ff41307fd00c3e24018"
	I1227 10:31:24.461499  518558 cri.go:96] found id: "ec04e59ae6c952c8927f173f6ba8de9972a72aeb5ef19a32d5625b317dc3d76e"
	I1227 10:31:24.461502  518558 cri.go:96] found id: "c80de0856a7e97dcdca435688c0cce0be0c03d163eb8a2f5a8dcb13ec35e129d"
	I1227 10:31:24.461505  518558 cri.go:96] found id: "0be2bd393e285cb49c8e5b5f66063ce6781e934558ad30c47aa3aec488565ab9"
	I1227 10:31:24.461508  518558 cri.go:96] found id: "5ef714a1055a6cf93a2f1f0f649e4d4fa6f789af9150c2755a1c2d09b53037b1"
	I1227 10:31:24.461511  518558 cri.go:96] found id: "4264015374f91b531af599acfc367aa072b442eccc1ffead423255914a0d9f09"
	I1227 10:31:24.461515  518558 cri.go:96] found id: "96e2bc84c864d4d7cc89f0f2517101b59c5cc5096c04209185554cf59b742f37"
	I1227 10:31:24.461520  518558 cri.go:96] found id: "f85ff0f366cab3121436ea435162a86c87b97787dc26bbbc8b0dd95316f338c4"
	I1227 10:31:24.461524  518558 cri.go:96] found id: "d6dac70955d6676da0bde6507fddc874505c8c28166dd15dd89c7d34aac1b578"
	I1227 10:31:24.461527  518558 cri.go:96] found id: ""
	I1227 10:31:24.461579  518558 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:31:24.477168  518558 out.go:203] 
	W1227 10:31:24.480085  518558 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:31:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:31:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:31:24.480102  518558 out.go:285] * 
	* 
	W1227 10:31:24.482611  518558 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:31:24.485551  518558 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-241090 --alsologtostderr -v=1 failed: exit status 80
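The pause attempt did not fail at the SSH or kubelet step; it failed when minikube tried to enumerate running containers with sudo runc list -f json, which exits 1 because /run/runc is missing on this crio node. Below is a minimal diagnostic sketch, assuming the no-preload-241090 profile is still up. The state-directory paths are assumptions to verify, not known values: runc conventionally keeps state under /run/runc and crun under /run/crun, and which one applies depends on how crio is configured on the node.
	# Inspect the crio drop-in config (the same file another test in this run reads on a different profile).
	out/minikube-linux-arm64 ssh -p no-preload-241090 -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# See which runtime state directory actually exists on the node (assumed default locations).
	out/minikube-linux-arm64 ssh -p no-preload-241090 -- sudo ls -d /run/runc /run/crun
	# List the same kube-system containers the pause path enumerates, but via crictl instead of runc.
	out/minikube-linux-arm64 ssh -p no-preload-241090 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
If crio is driving its containers through crun (or any runtime that does not populate /run/runc), the runc-based listing would fail exactly as captured above even though crictl still sees every container.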
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-241090
helpers_test.go:244: (dbg) docker inspect no-preload-241090:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a",
	        "Created": "2025-12-27T10:28:59.433064249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 512359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:30:20.810719909Z",
	            "FinishedAt": "2025-12-27T10:30:20.021260993Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/hosts",
	        "LogPath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a-json.log",
	        "Name": "/no-preload-241090",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241090:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241090",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a",
	                "LowerDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-241090",
	                "Source": "/var/lib/docker/volumes/no-preload-241090/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241090",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241090",
	                "name.minikube.sigs.k8s.io": "no-preload-241090",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9ec164e021771abb81a3fbf99651f22c7d209aacc71f567c09b090d915edeec",
	            "SandboxKey": "/var/run/docker/netns/f9ec164e0217",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241090": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:97:06:cc:93:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8d3a00ff7095640f7433799c8c32b498081b342b7c8dd02f4d6cb45f97d8125",
	                    "EndpointID": "ddf1a53e82b251e6c894bc500cc7d8cd8bb1209cdf4e33558208ce75e9a0b147",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241090",
	                        "f3d580a4684b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
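The inspect output above mostly restates what the failed pause already established: the container is running and guest port 22/tcp is published on 127.0.0.1:33443, the same host port the pause command's SSH client used. To extract just that mapping, the Go template the tooling itself runs can be reused directly; the jq form is an equivalent sketch that assumes jq is installed on the host.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-241090
	# equivalent, assuming jq is available on the host
	docker inspect no-preload-241090 | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'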
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241090 -n no-preload-241090
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241090 -n no-preload-241090: exit status 2 (433.108075ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
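The --format={{.Host}} query only reports the host state, so the output reads Running even though the command exits with status 2, which the helper explicitly tolerates ("may be ok"). After the partially executed pause above ran systemctl disable --now kubelet, a degraded overall status is expected here. To see every component at once, the other status fields can be requested as well; this is a sketch assuming the standard Host, Kubelet, APIServer and Kubeconfig fields of minikube status.
	out/minikube-linux-arm64 status -p no-preload-241090 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'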
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241090 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-241090 logs -n 25: (1.763717528s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-784377 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ ssh     │ force-systemd-flag-915850 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p force-systemd-flag-915850                                                                                                                                                                                                                  │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p disable-driver-mounts-913868                                                                                                                                                                                                               │ disable-driver-mounts-913868 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │                     │
	│ stop    │ -p embed-certs-367691 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable dashboard -p embed-certs-367691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable metrics-server -p no-preload-241090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ stop    │ -p no-preload-241090 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable dashboard -p no-preload-241090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ image   │ embed-certs-367691 image list --format=json                                                                                                                                                                                                   │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ pause   │ -p embed-certs-367691 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ addons  │ enable metrics-server -p newest-cni-443576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ image   │ no-preload-241090 image list --format=json                                                                                                                                                                                                    │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ pause   │ -p no-preload-241090 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ stop    │ -p newest-cni-443576 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:30:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:30:47.739844  515697 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:30:47.740038  515697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:47.740047  515697 out.go:374] Setting ErrFile to fd 2...
	I1227 10:30:47.740054  515697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:30:47.740332  515697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:30:47.740794  515697 out.go:368] Setting JSON to false
	I1227 10:30:47.741857  515697 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8001,"bootTime":1766823447,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:30:47.741939  515697 start.go:143] virtualization:  
	I1227 10:30:47.747727  515697 out.go:179] * [newest-cni-443576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:30:47.751829  515697 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:30:47.751932  515697 notify.go:221] Checking for updates...
	I1227 10:30:47.759056  515697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:30:47.762561  515697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:30:47.766284  515697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:30:47.769538  515697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:30:47.772732  515697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:30:47.776500  515697 config.go:182] Loaded profile config "no-preload-241090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:47.776596  515697 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:30:47.810586  515697 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:30:47.810719  515697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:30:47.909542  515697 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:30:47.897856362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:30:47.909652  515697 docker.go:319] overlay module found
	I1227 10:30:47.913030  515697 out.go:179] * Using the docker driver based on user configuration
	I1227 10:30:47.916261  515697 start.go:309] selected driver: docker
	I1227 10:30:47.916309  515697 start.go:928] validating driver "docker" against <nil>
	I1227 10:30:47.916333  515697 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:30:47.917340  515697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:30:48.044579  515697 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 10:30:48.032466615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:30:48.044739  515697 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1227 10:30:48.044764  515697 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1227 10:30:48.045079  515697 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:30:48.049627  515697 out.go:179] * Using Docker driver with root privileges
	I1227 10:30:48.054005  515697 cni.go:84] Creating CNI manager for ""
	I1227 10:30:48.054098  515697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:30:48.054108  515697 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:30:48.054187  515697 start.go:353] cluster config:
	{Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:30:48.057640  515697 out.go:179] * Starting "newest-cni-443576" primary control-plane node in "newest-cni-443576" cluster
	I1227 10:30:48.061802  515697 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:30:48.065153  515697 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:30:48.068050  515697 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:30:48.068103  515697 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:30:48.068133  515697 cache.go:65] Caching tarball of preloaded images
	I1227 10:30:48.068254  515697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:30:48.068576  515697 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:30:48.068591  515697 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:30:48.068721  515697 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/config.json ...
	I1227 10:30:48.068741  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/config.json: {Name:mk1f39da38d1a500495171d6f6e58e129f2d3616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:30:48.091008  515697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:30:48.091030  515697 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:30:48.091045  515697 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:30:48.091075  515697 start.go:360] acquireMachinesLock for newest-cni-443576: {Name:mka565ad41fecac1e9f8cd8d651491fd96f86258 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:30:48.091181  515697 start.go:364] duration metric: took 88.419µs to acquireMachinesLock for "newest-cni-443576"
	I1227 10:30:48.091206  515697 start.go:93] Provisioning new machine with config: &{Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:30:48.091278  515697 start.go:125] createHost starting for "" (driver="docker")
	W1227 10:30:45.676890  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:30:48.195140  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:30:48.095956  515697 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:30:48.096246  515697 start.go:159] libmachine.API.Create for "newest-cni-443576" (driver="docker")
	I1227 10:30:48.096288  515697 client.go:173] LocalClient.Create starting
	I1227 10:30:48.096368  515697 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem
	I1227 10:30:48.096401  515697 main.go:144] libmachine: Decoding PEM data...
	I1227 10:30:48.096417  515697 main.go:144] libmachine: Parsing certificate...
	I1227 10:30:48.096467  515697 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem
	I1227 10:30:48.096483  515697 main.go:144] libmachine: Decoding PEM data...
	I1227 10:30:48.096494  515697 main.go:144] libmachine: Parsing certificate...
	I1227 10:30:48.096848  515697 cli_runner.go:164] Run: docker network inspect newest-cni-443576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:30:48.114859  515697 cli_runner.go:211] docker network inspect newest-cni-443576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:30:48.114965  515697 network_create.go:284] running [docker network inspect newest-cni-443576] to gather additional debugging logs...
	I1227 10:30:48.114982  515697 cli_runner.go:164] Run: docker network inspect newest-cni-443576
	W1227 10:30:48.144375  515697 cli_runner.go:211] docker network inspect newest-cni-443576 returned with exit code 1
	I1227 10:30:48.144408  515697 network_create.go:287] error running [docker network inspect newest-cni-443576]: docker network inspect newest-cni-443576: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-443576 not found
	I1227 10:30:48.144421  515697 network_create.go:289] output of [docker network inspect newest-cni-443576]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-443576 not found
	
	** /stderr **
	I1227 10:30:48.144529  515697 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:30:48.166328  515697 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
	I1227 10:30:48.166786  515697 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ebae89a2105 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:cb:00:ea:c9:f6} reservation:<nil>}
	I1227 10:30:48.167134  515697 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6847566085e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:12:2b:ec:3f:0a} reservation:<nil>}
	I1227 10:30:48.167635  515697 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a46ac0}
	I1227 10:30:48.167654  515697 network_create.go:124] attempt to create docker network newest-cni-443576 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:30:48.167778  515697 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-443576 newest-cni-443576
	I1227 10:30:48.246484  515697 network_create.go:108] docker network newest-cni-443576 192.168.76.0/24 created
	I1227 10:30:48.246520  515697 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-443576" container
	I1227 10:30:48.246603  515697 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:30:48.265812  515697 cli_runner.go:164] Run: docker volume create newest-cni-443576 --label name.minikube.sigs.k8s.io=newest-cni-443576 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:30:48.283410  515697 oci.go:103] Successfully created a docker volume newest-cni-443576
	I1227 10:30:48.283512  515697 cli_runner.go:164] Run: docker run --rm --name newest-cni-443576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-443576 --entrypoint /usr/bin/test -v newest-cni-443576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:30:49.284088  515697 cli_runner.go:217] Completed: docker run --rm --name newest-cni-443576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-443576 --entrypoint /usr/bin/test -v newest-cni-443576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib: (1.000535306s)
	I1227 10:30:49.284133  515697 oci.go:107] Successfully prepared a docker volume newest-cni-443576
	I1227 10:30:49.284184  515697 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:30:49.284200  515697 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:30:49.284261  515697 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-443576:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	W1227 10:30:50.674556  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:30:53.174786  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:30:53.412991  515697 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-443576:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.128688994s)
	I1227 10:30:53.413027  515697 kic.go:203] duration metric: took 4.128824437s to extract preloaded images to volume ...
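
The "Completed: ... (4.128688994s)" and "duration metric: took ..." lines come from timing each shell-out. A small hedged Go sketch of that pattern applied to the preload extraction above; the tarball path here is a placeholder, not the jenkins cache path from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder path
		"-v", "newest-cni-443576:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
}
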
	W1227 10:30:53.413183  515697 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:30:53.413299  515697 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:30:53.468731  515697 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-443576 --name newest-cni-443576 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-443576 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-443576 --network newest-cni-443576 --ip 192.168.76.2 --volume newest-cni-443576:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:30:53.782640  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Running}}
	I1227 10:30:53.808691  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:30:53.841457  515697 cli_runner.go:164] Run: docker exec newest-cni-443576 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:30:53.902230  515697 oci.go:144] the created container "newest-cni-443576" has a running status.
	I1227 10:30:53.902280  515697 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa...
	I1227 10:30:54.134165  515697 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:30:54.161635  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:30:54.188686  515697 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:30:54.188713  515697 kic_runner.go:114] Args: [docker exec --privileged newest-cni-443576 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:30:54.270526  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:30:54.293932  515697 machine.go:94] provisionDockerMachine start ...
	I1227 10:30:54.294032  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:54.321731  515697 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:54.322136  515697 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 10:30:54.322153  515697 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:30:54.322764  515697 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60304->127.0.0.1:33448: read: connection reset by peer
	I1227 10:30:57.463600  515697 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-443576
	
	I1227 10:30:57.463624  515697 ubuntu.go:182] provisioning hostname "newest-cni-443576"
	I1227 10:30:57.463697  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:57.481996  515697 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:57.482315  515697 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 10:30:57.482327  515697 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-443576 && echo "newest-cni-443576" | sudo tee /etc/hostname
	I1227 10:30:57.634880  515697 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-443576
	
	I1227 10:30:57.635035  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:57.653698  515697 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:57.654022  515697 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 10:30:57.654038  515697 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-443576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-443576/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-443576' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1227 10:30:55.674144  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:30:57.674399  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:30:59.675440  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:30:57.800296  515697 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:30:57.800325  515697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-297941/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-297941/.minikube}
	I1227 10:30:57.800373  515697 ubuntu.go:190] setting up certificates
	I1227 10:30:57.800392  515697 provision.go:84] configureAuth start
	I1227 10:30:57.800462  515697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-443576
	I1227 10:30:57.821549  515697 provision.go:143] copyHostCerts
	I1227 10:30:57.821650  515697 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem, removing ...
	I1227 10:30:57.821665  515697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem
	I1227 10:30:57.821744  515697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/ca.pem (1082 bytes)
	I1227 10:30:57.821846  515697 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem, removing ...
	I1227 10:30:57.821855  515697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem
	I1227 10:30:57.821885  515697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/cert.pem (1123 bytes)
	I1227 10:30:57.821955  515697 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem, removing ...
	I1227 10:30:57.821963  515697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem
	I1227 10:30:57.821989  515697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-297941/.minikube/key.pem (1675 bytes)
	I1227 10:30:57.822047  515697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem org=jenkins.newest-cni-443576 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-443576]
	I1227 10:30:58.127367  515697 provision.go:177] copyRemoteCerts
	I1227 10:30:58.127494  515697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:30:58.127581  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:58.145682  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:58.244197  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:30:58.263945  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:30:58.282694  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:30:58.302309  515697 provision.go:87] duration metric: took 501.893829ms to configureAuth
	I1227 10:30:58.302339  515697 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:30:58.302536  515697 config.go:182] Loaded profile config "newest-cni-443576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:30:58.302645  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:58.321013  515697 main.go:144] libmachine: Using SSH client type: native
	I1227 10:30:58.321332  515697 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1227 10:30:58.321355  515697 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1227 10:30:58.696058  515697 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1227 10:30:58.696084  515697 machine.go:97] duration metric: took 4.402132533s to provisionDockerMachine
	I1227 10:30:58.696096  515697 client.go:176] duration metric: took 10.599800622s to LocalClient.Create
	I1227 10:30:58.696109  515697 start.go:167] duration metric: took 10.5998655s to libmachine.API.Create "newest-cni-443576"
	I1227 10:30:58.696116  515697 start.go:293] postStartSetup for "newest-cni-443576" (driver="docker")
	I1227 10:30:58.696126  515697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:30:58.696191  515697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:30:58.696243  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:58.713599  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:58.822674  515697 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:30:58.826281  515697 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:30:58.826308  515697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:30:58.826320  515697 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/addons for local assets ...
	I1227 10:30:58.826375  515697 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-297941/.minikube/files for local assets ...
	I1227 10:30:58.826467  515697 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem -> 2998112.pem in /etc/ssl/certs
	I1227 10:30:58.826575  515697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:30:58.835567  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:30:58.854256  515697 start.go:296] duration metric: took 158.12479ms for postStartSetup
	I1227 10:30:58.854654  515697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-443576
	I1227 10:30:58.872667  515697 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/config.json ...
	I1227 10:30:58.872943  515697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:30:58.872997  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:58.891074  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:58.989177  515697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:30:58.993941  515697 start.go:128] duration metric: took 10.902647547s to createHost
	I1227 10:30:58.993965  515697 start.go:83] releasing machines lock for "newest-cni-443576", held for 10.902775319s
	I1227 10:30:58.994037  515697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-443576
	I1227 10:30:59.015419  515697 ssh_runner.go:195] Run: cat /version.json
	I1227 10:30:59.015470  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:59.015526  515697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:30:59.015601  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:30:59.037766  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:59.039181  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:30:59.131963  515697 ssh_runner.go:195] Run: systemctl --version
	I1227 10:30:59.251466  515697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1227 10:30:59.293103  515697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:30:59.297685  515697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:30:59.297763  515697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:30:59.331251  515697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:30:59.331285  515697 start.go:496] detecting cgroup driver to use...
	I1227 10:30:59.331320  515697 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 10:30:59.331375  515697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 10:30:59.352837  515697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 10:30:59.366833  515697 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:30:59.366898  515697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:30:59.385453  515697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:30:59.405856  515697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:30:59.528930  515697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:30:59.655137  515697 docker.go:234] disabling docker service ...
	I1227 10:30:59.655258  515697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:30:59.683279  515697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:30:59.697894  515697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:30:59.826458  515697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:30:59.954467  515697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:30:59.969672  515697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:30:59.987127  515697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1227 10:30:59.987205  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:30:59.996094  515697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1227 10:30:59.996175  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.047420  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.098178  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.130612  515697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:31:00.145558  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.241379  515697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.303749  515697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1227 10:31:00.326397  515697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:31:00.340982  515697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:31:00.355141  515697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:31:00.533793  515697 ssh_runner.go:195] Run: sudo systemctl restart crio
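
The sed edits above adjust the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.10.1 and the cgroup manager is switched to cgroupfs. A hedged Go sketch of those two central edits (an illustration, not minikube's crio.go; a systemctl restart crio would still be needed, as in the log):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}
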
	I1227 10:31:00.705140  515697 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I1227 10:31:00.705238  515697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1227 10:31:00.709612  515697 start.go:574] Will wait 60s for crictl version
	I1227 10:31:00.709681  515697 ssh_runner.go:195] Run: which crictl
	I1227 10:31:00.713569  515697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:31:00.739464  515697 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1227 10:31:00.739553  515697 ssh_runner.go:195] Run: crio --version
	I1227 10:31:00.768399  515697 ssh_runner.go:195] Run: crio --version
	I1227 10:31:00.800923  515697 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.34.3 ...
	I1227 10:31:00.803896  515697 cli_runner.go:164] Run: docker network inspect newest-cni-443576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:31:00.823946  515697 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:31:00.828025  515697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
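
The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any existing entry, append the new one, then copy the file back into place. An equivalent hedged Go sketch, run directly on the node rather than over SSH as minikube does:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale mapping, as `grep -v` does in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
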
	I1227 10:31:00.841281  515697 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1227 10:31:00.844251  515697 kubeadm.go:884] updating cluster {Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:31:00.844404  515697 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:31:00.844479  515697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:31:00.880209  515697 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:31:00.880240  515697 crio.go:433] Images already preloaded, skipping extraction
	I1227 10:31:00.880297  515697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:31:00.905979  515697 crio.go:561] all images are preloaded for cri-o runtime.
	I1227 10:31:00.906004  515697 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:31:00.906014  515697 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I1227 10:31:00.906104  515697 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-443576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:31:00.906192  515697 ssh_runner.go:195] Run: crio config
	I1227 10:31:00.986419  515697 cni.go:84] Creating CNI manager for ""
	I1227 10:31:00.986443  515697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:31:00.986465  515697 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1227 10:31:00.986490  515697 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-443576 NodeName:newest-cni-443576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:31:00.986618  515697 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-443576"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:31:00.986689  515697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:31:01.002734  515697 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:31:01.002824  515697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:31:01.013674  515697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1227 10:31:01.029670  515697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:31:01.043495  515697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1227 10:31:01.058211  515697 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:31:01.062074  515697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:31:01.073140  515697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:31:01.202135  515697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:31:01.220551  515697 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576 for IP: 192.168.76.2
	I1227 10:31:01.220572  515697 certs.go:195] generating shared ca certs ...
	I1227 10:31:01.220589  515697 certs.go:227] acquiring lock for ca certs: {Name:mkf0455840442d82ad2865a090879fc8de65e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.220742  515697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key
	I1227 10:31:01.220785  515697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key
	I1227 10:31:01.220794  515697 certs.go:257] generating profile certs ...
	I1227 10:31:01.220855  515697 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.key
	I1227 10:31:01.220866  515697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.crt with IP's: []
	I1227 10:31:01.500299  515697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.crt ...
	I1227 10:31:01.500330  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.crt: {Name:mk27b2f1703e7ad03071d745625e8d67bf1df612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.500554  515697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.key ...
	I1227 10:31:01.500572  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/client.key: {Name:mk404625f8d36cbd78f1b568e4ef9e18bb075ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.500661  515697 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key.ca20e437
	I1227 10:31:01.500680  515697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt.ca20e437 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:31:01.588269  515697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt.ca20e437 ...
	I1227 10:31:01.588297  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt.ca20e437: {Name:mk29454875d1a4a7ee8adc3fcaf51d5bb4d705ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.588468  515697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key.ca20e437 ...
	I1227 10:31:01.588483  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key.ca20e437: {Name:mk2a7f5eccc8ec49ed8c6efb935a9f8f9bfcde90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.588569  515697 certs.go:382] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt.ca20e437 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt
	I1227 10:31:01.588646  515697 certs.go:386] copying /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key.ca20e437 -> /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key
	I1227 10:31:01.588717  515697 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.key
	I1227 10:31:01.588737  515697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.crt with IP's: []
	I1227 10:31:01.832033  515697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.crt ...
	I1227 10:31:01.832067  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.crt: {Name:mk5735db882065a1ec364cd7306f56721cca6054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:01.832257  515697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.key ...
	I1227 10:31:01.832272  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.key: {Name:mk1720aa67684702f71e0f4dddbb7c41098f2696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
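
The profile certs generated above are CA-signed x509 certificates carrying the SANs printed in the log. A hedged Go sketch of that step using a throwaway CA; the key type, key size and subject fields are assumptions, and the SAN list combines the DNS names and IPs seen in the cert-generation lines (minikube's actual logic lives in certs.go/crypto.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the shared minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the SANs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-443576"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	// The key would be written alongside the cert, as the WriteFile lines above show.
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}
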
	I1227 10:31:01.832472  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem (1338 bytes)
	W1227 10:31:01.832519  515697 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811_empty.pem, impossibly tiny 0 bytes
	I1227 10:31:01.832534  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:31:01.832560  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:31:01.832588  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:31:01.832617  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/certs/key.pem (1675 bytes)
	I1227 10:31:01.832667  515697 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem (1708 bytes)
	I1227 10:31:01.833253  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:31:01.853223  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:31:01.873422  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:31:01.892060  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:31:01.911133  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 10:31:01.929859  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:31:01.948877  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:31:01.982511  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:31:02.005482  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/certs/299811.pem --> /usr/share/ca-certificates/299811.pem (1338 bytes)
	I1227 10:31:02.029021  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/ssl/certs/2998112.pem --> /usr/share/ca-certificates/2998112.pem (1708 bytes)
	I1227 10:31:02.052055  515697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-297941/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:31:02.073829  515697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:31:02.088165  515697 ssh_runner.go:195] Run: openssl version
	I1227 10:31:02.097022  515697 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2998112.pem
	I1227 10:31:02.107255  515697 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2998112.pem /etc/ssl/certs/2998112.pem
	I1227 10:31:02.116894  515697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2998112.pem
	I1227 10:31:02.121062  515697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:34 /usr/share/ca-certificates/2998112.pem
	I1227 10:31:02.121131  515697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2998112.pem
	I1227 10:31:02.162871  515697 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:31:02.173424  515697 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2998112.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:31:02.182302  515697 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:31:02.190406  515697 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:31:02.198225  515697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:31:02.202074  515697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:30 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:31:02.202144  515697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:31:02.244791  515697 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:31:02.254077  515697 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:31:02.262505  515697 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/299811.pem
	I1227 10:31:02.270692  515697 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/299811.pem /etc/ssl/certs/299811.pem
	I1227 10:31:02.278658  515697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299811.pem
	I1227 10:31:02.282361  515697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:34 /usr/share/ca-certificates/299811.pem
	I1227 10:31:02.282427  515697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299811.pem
	I1227 10:31:02.324496  515697 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:31:02.332690  515697 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/299811.pem /etc/ssl/certs/51391683.0
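
The openssl/ln sequence above installs the certificates using OpenSSL's hash-symlink convention: the cert is placed under /usr/share/ca-certificates and /etc/ssl/certs/<subject-hash>.0 is pointed at it so TLS clients can find it. A hedged Go sketch of the same convention (not minikube's code; paths match the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(src, name string) error {
	dst := "/usr/share/ca-certificates/" + name
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0o644); err != nil {
		return err
	}
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash OpenSSL
	// uses to look certificates up in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // mimic `ln -fs`
	return os.Symlink(dst, link)
}

func main() {
	if err := installCA("/var/lib/minikube/certs/ca.crt", "minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
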
	I1227 10:31:02.340741  515697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:31:02.344969  515697 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:31:02.345067  515697 kubeadm.go:401] StartCluster: {Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:31:02.345192  515697 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1227 10:31:02.345259  515697 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:31:02.373208  515697 cri.go:96] found id: ""
	I1227 10:31:02.373283  515697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:31:02.381631  515697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:31:02.390013  515697 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:31:02.390109  515697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:31:02.398535  515697 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:31:02.398554  515697 kubeadm.go:158] found existing configuration files:
	
	I1227 10:31:02.398610  515697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:31:02.406819  515697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:31:02.406893  515697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:31:02.415759  515697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:31:02.424440  515697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:31:02.424540  515697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:31:02.432344  515697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:31:02.440655  515697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:31:02.440751  515697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:31:02.448613  515697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:31:02.456813  515697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:31:02.456944  515697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:31:02.466276  515697 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:31:02.505238  515697 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:31:02.505302  515697 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:31:02.581332  515697 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:31:02.581409  515697 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:31:02.581450  515697 kubeadm.go:319] OS: Linux
	I1227 10:31:02.581500  515697 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:31:02.581553  515697 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:31:02.581604  515697 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:31:02.581655  515697 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:31:02.581707  515697 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:31:02.581759  515697 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:31:02.581805  515697 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:31:02.581857  515697 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:31:02.581907  515697 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:31:02.650205  515697 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:31:02.650366  515697 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:31:02.650499  515697 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:31:02.660124  515697 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:31:02.666548  515697 out.go:252]   - Generating certificates and keys ...
	I1227 10:31:02.666692  515697 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:31:02.666783  515697 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1227 10:31:01.676557  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	W1227 10:31:04.175007  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:31:02.753120  515697 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:31:03.008736  515697 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:31:03.355280  515697 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:31:03.873610  515697 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:31:03.964525  515697 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:31:03.964865  515697 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-443576] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:31:04.188126  515697 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:31:04.188512  515697 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-443576] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:31:04.482811  515697 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:31:04.662329  515697 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:31:04.955727  515697 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:31:04.956022  515697 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:31:05.287476  515697 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:31:05.365374  515697 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:31:05.433818  515697 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:31:05.514997  515697 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:31:06.051405  515697 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:31:06.052170  515697 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:31:06.054826  515697 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:31:06.058107  515697 out.go:252]   - Booting up control plane ...
	I1227 10:31:06.058222  515697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:31:06.058993  515697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:31:06.059823  515697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:31:06.076270  515697 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:31:06.076379  515697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:31:06.083780  515697 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:31:06.084137  515697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:31:06.084184  515697 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:31:06.228851  515697 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:31:06.228977  515697 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:31:07.228164  515697 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001477804s
	I1227 10:31:07.232819  515697 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:31:07.232915  515697 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 10:31:07.233223  515697 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:31:07.233316  515697 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1227 10:31:06.175500  512231 pod_ready.go:104] pod "coredns-7d764666f9-5p545" is not "Ready", error: <nil>
	I1227 10:31:08.674830  512231 pod_ready.go:94] pod "coredns-7d764666f9-5p545" is "Ready"
	I1227 10:31:08.674864  512231 pod_ready.go:86] duration metric: took 34.506418584s for pod "coredns-7d764666f9-5p545" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.678036  512231 pod_ready.go:83] waiting for pod "etcd-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.683079  512231 pod_ready.go:94] pod "etcd-no-preload-241090" is "Ready"
	I1227 10:31:08.683105  512231 pod_ready.go:86] duration metric: took 5.037713ms for pod "etcd-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.685337  512231 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.693678  512231 pod_ready.go:94] pod "kube-apiserver-no-preload-241090" is "Ready"
	I1227 10:31:08.693704  512231 pod_ready.go:86] duration metric: took 8.343084ms for pod "kube-apiserver-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.696061  512231 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:08.872178  512231 pod_ready.go:94] pod "kube-controller-manager-no-preload-241090" is "Ready"
	I1227 10:31:08.872257  512231 pod_ready.go:86] duration metric: took 176.1169ms for pod "kube-controller-manager-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:09.072688  512231 pod_ready.go:83] waiting for pod "kube-proxy-8xv88" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:09.472735  512231 pod_ready.go:94] pod "kube-proxy-8xv88" is "Ready"
	I1227 10:31:09.472780  512231 pod_ready.go:86] duration metric: took 400.067073ms for pod "kube-proxy-8xv88" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:09.673252  512231 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:10.072289  512231 pod_ready.go:94] pod "kube-scheduler-no-preload-241090" is "Ready"
	I1227 10:31:10.072315  512231 pod_ready.go:86] duration metric: took 399.036302ms for pod "kube-scheduler-no-preload-241090" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:31:10.072328  512231 pod_ready.go:40] duration metric: took 35.908081379s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:31:10.154141  512231 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:31:10.157185  512231 out.go:203] 
	W1227 10:31:10.160065  512231 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:31:10.162850  512231 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:31:10.165699  512231 out.go:179] * Done! kubectl is now configured to use "no-preload-241090" cluster and "default" namespace by default
	I1227 10:31:09.747505  515697 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.514278837s
	I1227 10:31:10.957891  515697 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.725029005s
	I1227 10:31:12.734297  515697 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501385134s
	I1227 10:31:12.766618  515697 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:31:12.781395  515697 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:31:12.798518  515697 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:31:12.798724  515697 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-443576 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:31:12.816666  515697 kubeadm.go:319] [bootstrap-token] Using token: 9unjxw.u0987e039sxivp41
	I1227 10:31:12.819773  515697 out.go:252]   - Configuring RBAC rules ...
	I1227 10:31:12.819915  515697 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:31:12.824415  515697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:31:12.833307  515697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:31:12.841763  515697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:31:12.846427  515697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:31:12.851137  515697 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:31:13.142811  515697 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:31:13.574149  515697 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:31:14.141060  515697 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:31:14.142169  515697 kubeadm.go:319] 
	I1227 10:31:14.142248  515697 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:31:14.142253  515697 kubeadm.go:319] 
	I1227 10:31:14.142330  515697 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:31:14.142335  515697 kubeadm.go:319] 
	I1227 10:31:14.142360  515697 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:31:14.142418  515697 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:31:14.142470  515697 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:31:14.142474  515697 kubeadm.go:319] 
	I1227 10:31:14.142534  515697 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:31:14.142540  515697 kubeadm.go:319] 
	I1227 10:31:14.142588  515697 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:31:14.142591  515697 kubeadm.go:319] 
	I1227 10:31:14.142642  515697 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:31:14.142718  515697 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:31:14.142786  515697 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:31:14.142790  515697 kubeadm.go:319] 
	I1227 10:31:14.142874  515697 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:31:14.142968  515697 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:31:14.142973  515697 kubeadm.go:319] 
	I1227 10:31:14.143062  515697 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9unjxw.u0987e039sxivp41 \
	I1227 10:31:14.143186  515697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 \
	I1227 10:31:14.143208  515697 kubeadm.go:319] 	--control-plane 
	I1227 10:31:14.143212  515697 kubeadm.go:319] 
	I1227 10:31:14.143297  515697 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:31:14.143300  515697 kubeadm.go:319] 
	I1227 10:31:14.143383  515697 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9unjxw.u0987e039sxivp41 \
	I1227 10:31:14.143485  515697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8100ef36278c5f9d6ea8dbffe90eac624e0660246170a3269d1d3fdab84af875 
	I1227 10:31:14.148379  515697 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:31:14.148810  515697 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:31:14.148923  515697 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:31:14.148944  515697 cni.go:84] Creating CNI manager for ""
	I1227 10:31:14.148952  515697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:31:14.152324  515697 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:31:14.155359  515697 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:31:14.159782  515697 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 10:31:14.159802  515697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:31:14.176175  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 10:31:14.505318  515697 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:31:14.505474  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:14.505476  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-443576 minikube.k8s.io/updated_at=2025_12_27T10_31_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=newest-cni-443576 minikube.k8s.io/primary=true
	I1227 10:31:14.671145  515697 ops.go:34] apiserver oom_adj: -16
	I1227 10:31:14.671266  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:15.171388  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:15.671701  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:16.171436  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:16.671693  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:17.172328  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:17.671871  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:18.172062  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:18.672045  515697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:31:18.792460  515697 kubeadm.go:1114] duration metric: took 4.287061597s to wait for elevateKubeSystemPrivileges
	I1227 10:31:18.792489  515697 kubeadm.go:403] duration metric: took 16.447445382s to StartCluster
	I1227 10:31:18.792508  515697 settings.go:142] acquiring lock: {Name:mkd8982e2b055d379b1ed67f6c962a36a3b427c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:18.792572  515697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:31:18.793491  515697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/kubeconfig: {Name:mk9b0395cf9df161cfb49b66943a0d24161dfe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:18.793728  515697 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:31:18.793814  515697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:31:18.794075  515697 config.go:182] Loaded profile config "newest-cni-443576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:31:18.794110  515697 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:31:18.794166  515697 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-443576"
	I1227 10:31:18.794183  515697 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-443576"
	I1227 10:31:18.794224  515697 host.go:66] Checking if "newest-cni-443576" exists ...
	I1227 10:31:18.795056  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:31:18.795211  515697 addons.go:70] Setting default-storageclass=true in profile "newest-cni-443576"
	I1227 10:31:18.795227  515697 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-443576"
	I1227 10:31:18.795467  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:31:18.800083  515697 out.go:179] * Verifying Kubernetes components...
	I1227 10:31:18.803148  515697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:31:18.838031  515697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:31:18.840473  515697 addons.go:239] Setting addon default-storageclass=true in "newest-cni-443576"
	I1227 10:31:18.840516  515697 host.go:66] Checking if "newest-cni-443576" exists ...
	I1227 10:31:18.840992  515697 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:31:18.841008  515697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:31:18.841062  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:31:18.843758  515697 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:31:18.876080  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:31:18.882393  515697 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:31:18.882414  515697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:31:18.882477  515697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:31:18.916112  515697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:31:19.157173  515697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:31:19.171104  515697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:31:19.171233  515697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:31:19.187596  515697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:31:20.100778  515697 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:31:20.100898  515697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:31:20.101062  515697 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1227 10:31:20.139014  515697 api_server.go:72] duration metric: took 1.345254447s to wait for apiserver process to appear ...
	I1227 10:31:20.139039  515697 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:31:20.139062  515697 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:31:20.153115  515697 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:31:20.158296  515697 api_server.go:141] control plane version: v1.35.0
	I1227 10:31:20.158377  515697 api_server.go:131] duration metric: took 19.330723ms to wait for apiserver health ...
	I1227 10:31:20.158402  515697 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:31:20.179855  515697 system_pods.go:59] 9 kube-system pods found
	I1227 10:31:20.180045  515697 system_pods.go:61] "coredns-7d764666f9-kndw2" [92c4f590-9ed1-4e08-a1d5-6e21b6bd13ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:31:20.180083  515697 system_pods.go:61] "coredns-7d764666f9-w5pw2" [90fb0105-0460-434f-9823-e2a713de5c12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:31:20.180092  515697 system_pods.go:61] "etcd-newest-cni-443576" [7748ef5b-d12d-4ab3-84cc-bcbfff673ff6] Running
	I1227 10:31:20.180100  515697 system_pods.go:61] "kindnet-5d2fh" [50656616-4132-47e7-a39a-86fcb9ca8a73] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1227 10:31:20.180105  515697 system_pods.go:61] "kube-apiserver-newest-cni-443576" [8edbae58-8cd9-4086-81f0-1f57bea869e4] Running
	I1227 10:31:20.180148  515697 system_pods.go:61] "kube-controller-manager-newest-cni-443576" [8dbcddda-24bc-492b-8638-fe0d2d40c827] Running
	I1227 10:31:20.180156  515697 system_pods.go:61] "kube-proxy-xj5vc" [dda1f65f-8d91-4868-a723-87bf2ec5bef8] Running
	I1227 10:31:20.180161  515697 system_pods.go:61] "kube-scheduler-newest-cni-443576" [12ee9bd3-c8e6-4d60-97f5-a74b7ad7fe94] Running
	I1227 10:31:20.180167  515697 system_pods.go:61] "storage-provisioner" [ddc68cbe-819e-4a44-a2da-fd69d485cde1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1227 10:31:20.180174  515697 system_pods.go:74] duration metric: took 21.751047ms to wait for pod list to return data ...
	I1227 10:31:20.180183  515697 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:31:20.181324  515697 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:31:20.184590  515697 addons.go:530] duration metric: took 1.390469233s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1227 10:31:20.203087  515697 default_sa.go:45] found service account: "default"
	I1227 10:31:20.203110  515697 default_sa.go:55] duration metric: took 22.921784ms for default service account to be created ...
	I1227 10:31:20.203124  515697 kubeadm.go:587] duration metric: took 1.40937244s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:31:20.203140  515697 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:31:20.220017  515697 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:31:20.220048  515697 node_conditions.go:123] node cpu capacity is 2
	I1227 10:31:20.220114  515697 node_conditions.go:105] duration metric: took 16.968148ms to run NodePressure ...
	I1227 10:31:20.220128  515697 start.go:242] waiting for startup goroutines ...
	I1227 10:31:20.605242  515697 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-443576" context rescaled to 1 replicas
	I1227 10:31:20.605321  515697 start.go:247] waiting for cluster config update ...
	I1227 10:31:20.605348  515697 start.go:256] writing updated cluster config ...
	I1227 10:31:20.605675  515697 ssh_runner.go:195] Run: rm -f paused
	I1227 10:31:20.673734  515697 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:31:20.677225  515697 out.go:203] 
	W1227 10:31:20.680112  515697 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:31:20.683055  515697 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:31:20.686036  515697 out.go:179] * Done! kubectl is now configured to use "newest-cni-443576" cluster and "default" namespace by default
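
	The startup trace above ends with minikube's readiness gate: once kubeadm init returns, api_server.go polls the apiserver /healthz endpoint until it answers 200, then verifies the kube-system pods before the addons are reported and the profile is marked Done. The short Go sketch below mirrors that polling pattern for orientation only; the endpoint URL, overall timeout, and insecure TLS handling are assumptions for the example and are not minikube's actual implementation.

	// Illustrative sketch only: a /healthz polling loop in the spirit of the
	// api_server.go checks logged above. URL, timeout and TLS handling are
	// assumptions, not minikube's real code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The bootstrapping apiserver serves a cluster-internal CA cert,
			// so verification is skipped in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// Matches the log line: ".../healthz returned 200: ok"
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}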
	
	
	==> CRI-O <==
	Dec 27 10:31:01 no-preload-241090 crio[655]: time="2025-12-27T10:31:01.276796498Z" level=info msg="Removed container 32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v/dashboard-metrics-scraper" id=75e17530-1bc2-418a-a6d9-e92a955ecf1d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:31:03 no-preload-241090 conmon[1167]: conmon 72ffaa9a1bdb7fd593a1 <ninfo>: container 1175 exited with status 1
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.247869665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=64832228-90e2-47a9-b81f-85b1d11c6c3a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.249277286Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=17c8040e-4e2c-4d87-bbe6-e7fa705ba0d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.255162244Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8f5b72a0-bee2-4cb7-b0b2-c9d6768bbf18 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.255270176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.264309275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.264491939Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d9edd24076e5e6c3687b430de7df5c57f99fdc38bb994c507758d745325e48cb/merged/etc/passwd: no such file or directory"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.264513757Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d9edd24076e5e6c3687b430de7df5c57f99fdc38bb994c507758d745325e48cb/merged/etc/group: no such file or directory"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.264877206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.295441103Z" level=info msg="Created container f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d: kube-system/storage-provisioner/storage-provisioner" id=8f5b72a0-bee2-4cb7-b0b2-c9d6768bbf18 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.300182025Z" level=info msg="Starting container: f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d" id=d56a5d7b-f3b9-492d-846b-ac930e9f82df name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.309471383Z" level=info msg="Started container" PID=1660 containerID=f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d description=kube-system/storage-provisioner/storage-provisioner id=d56a5d7b-f3b9-492d-846b-ac930e9f82df name=/runtime.v1.RuntimeService/StartContainer sandboxID=1681eff56fe9a005a5494f99f0f99ff34aba42ef35bc7a5fff7677b310a2eb9d
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.923209161Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.929278638Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.929329879Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.929357777Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.932836492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.932869444Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.93289296Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.936432007Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.936628464Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.936720568Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.94329323Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.943327732Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f2e7e72f94b16       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   1681eff56fe9a       storage-provisioner                          kube-system
	f85ff0f366cab       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   305388c3c12bb       dashboard-metrics-scraper-867fb5f87b-8pk5v   kubernetes-dashboard
	d6dac70955d66       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago      Running             kubernetes-dashboard        0                   098c3ab532fc7       kubernetes-dashboard-b84665fb8-8fsf7         kubernetes-dashboard
	b0b9d5cfc0f2d       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           52 seconds ago      Running             coredns                     1                   9dce979620fbd       coredns-7d764666f9-5p545                     kube-system
	72ffaa9a1bdb7       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago      Exited              storage-provisioner         1                   1681eff56fe9a       storage-provisioner                          kube-system
	df7e7c4968201       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   c56b64a00e1b4       busybox                                      default
	ec04e59ae6c95       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           52 seconds ago      Running             kindnet-cni                 1                   71ed3dde7ff1c       kindnet-jh987                                kube-system
	c80de0856a7e9       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           52 seconds ago      Running             kube-proxy                  1                   58d24cc3ee2b0       kube-proxy-8xv88                             kube-system
	0be2bd393e285       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           57 seconds ago      Running             kube-scheduler              1                   076d84287b404       kube-scheduler-no-preload-241090             kube-system
	5ef714a1055a6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           57 seconds ago      Running             kube-controller-manager     1                   2d3a6cc3ed868       kube-controller-manager-no-preload-241090    kube-system
	4264015374f91       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           57 seconds ago      Running             kube-apiserver              1                   6d404281eed17       kube-apiserver-no-preload-241090             kube-system
	96e2bc84c864d       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           57 seconds ago      Running             etcd                        1                   d01e9091b928f       etcd-no-preload-241090                       kube-system
	
	
	==> coredns [b0b9d5cfc0f2dea1b46d3d2fc69780ae4cc75bc974985fea86b7ba9c545a6bb0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43285 - 29959 "HINFO IN 8390698111263921620.145518840492267225. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015758766s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-241090
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-241090
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=no-preload-241090
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_29_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:29:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-241090
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:31:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:31:03 +0000   Sat, 27 Dec 2025 10:29:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:31:03 +0000   Sat, 27 Dec 2025 10:29:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:31:03 +0000   Sat, 27 Dec 2025 10:29:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:31:03 +0000   Sat, 27 Dec 2025 10:29:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-241090
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                a9a49f95-a33e-4498-b8f5-c7af217c180a
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-5p545                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-no-preload-241090                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-jh987                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-241090              250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-241090     200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-8xv88                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-241090              100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-8pk5v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-8fsf7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node no-preload-241090 event: Registered Node no-preload-241090 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node no-preload-241090 event: Registered Node no-preload-241090 in Controller
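
	The Conditions and Allocatable figures captured in the describe output above are the same data a programmatic check would read from the API. The client-go sketch below shows one way to fetch them; it is an illustration only, the kubeconfig path and node name are assumptions, and the test itself does not run this code.

	// Illustrative sketch only: reading the node conditions shown in the
	// "describe nodes" section via client-go. Kubeconfig path and node name
	// are assumptions for the example.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-241090", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Prints MemoryPressure / DiskPressure / PIDPressure / Ready,
		// corresponding to the Conditions table above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}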
	
	
	==> dmesg <==
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	[Dec27 10:29] overlayfs: idmapped layers are currently not supported
	[ +34.978626] overlayfs: idmapped layers are currently not supported
	[Dec27 10:30] overlayfs: idmapped layers are currently not supported
	[Dec27 10:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [96e2bc84c864d4d7cc89f0f2517101b59c5cc5096c04209185554cf59b742f37] <==
	{"level":"info","ts":"2025-12-27T10:30:28.860529Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:30:28.860539Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:30:28.860725Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:30:28.860735Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:30:28.864302Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-27T10:30:28.864432Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:30:28.864528Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:30:29.804018Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:30:29.804174Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:30:29.804266Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:30:29.804330Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:30:29.804373Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:30:29.805683Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:30:29.805748Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:30:29.805790Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:30:29.805825Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:30:29.807041Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-241090 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:30:29.807114Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:30:29.807169Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:30:29.812055Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:30:29.817210Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:30:29.821301Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:30:29.885298Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:30:29.824668Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:30:29.908167Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:31:26 up  2:13,  0 user,  load average: 4.55, 3.04, 2.35
	Linux no-preload-241090 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec04e59ae6c952c8927f173f6ba8de9972a72aeb5ef19a32d5625b317dc3d76e] <==
	I1227 10:30:33.653022       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:30:33.717203       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:30:33.717423       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:30:33.717468       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:30:33.717511       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:30:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:30:33.922695       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:30:33.922791       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:30:33.922827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:30:33.923691       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:31:03.923235       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:31:03.923348       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:31:03.923409       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:31:03.924587       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:31:05.528521       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:31:05.528563       1 metrics.go:72] Registering metrics
	I1227 10:31:05.528632       1 controller.go:711] "Syncing nftables rules"
	I1227 10:31:13.922872       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:31:13.922965       1 main.go:301] handling current node
	I1227 10:31:23.928903       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:31:23.928944       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4264015374f91b531af599acfc367aa072b442eccc1ffead423255914a0d9f09] <==
	I1227 10:30:32.522002       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 10:30:32.522349       1 aggregator.go:187] initial CRD sync complete...
	I1227 10:30:32.522367       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 10:30:32.522373       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:30:32.522378       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:30:32.532126       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 10:30:32.532150       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:30:32.532257       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:30:32.532330       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:30:32.538632       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:30:32.548932       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1227 10:30:32.570346       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:30:32.580202       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:30:32.595951       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:30:32.942535       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:30:33.053742       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:30:33.231386       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:30:33.393731       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:30:33.515996       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:30:33.568881       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:30:33.858248       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.254.7"}
	I1227 10:30:33.894082       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.174.251"}
	I1227 10:30:35.969260       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:30:36.019783       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:30:36.064907       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5ef714a1055a6cf93a2f1f0f649e4d4fa6f789af9150c2755a1c2d09b53037b1] <==
	I1227 10:30:35.480852       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481019       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481030       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481038       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.484732       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481083       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.484837       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:30:35.484888       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:30:35.484898       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:30:35.484904       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481091       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481097       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481103       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481109       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481129       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481046       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481052       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481058       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.498971       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:30:35.481077       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.529134       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.584166       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.584318       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:30:35.584350       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:30:35.601441       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [c80de0856a7e97dcdca435688c0cce0be0c03d163eb8a2f5a8dcb13ec35e129d] <==
	I1227 10:30:33.685218       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:30:33.868806       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:30:33.969132       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:33.969178       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 10:30:33.969277       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:30:34.143929       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:30:34.145234       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:30:34.176701       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:30:34.177092       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:30:34.177118       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:30:34.180864       1 config.go:200] "Starting service config controller"
	I1227 10:30:34.180887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:30:34.180910       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:30:34.180914       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:30:34.180925       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:30:34.180929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:30:34.181563       1 config.go:309] "Starting node config controller"
	I1227 10:30:34.181617       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:30:34.181625       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:30:34.281294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:30:34.281488       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:30:34.282634       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0be2bd393e285cb49c8e5b5f66063ce6781e934558ad30c47aa3aec488565ab9] <==
	I1227 10:30:30.421700       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:30:32.248923       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:30:32.248955       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:30:32.248965       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:30:32.248982       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:30:32.401863       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:30:32.401900       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:30:32.414773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:30:32.414997       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:30:32.415043       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:30:32.415081       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:30:32.515222       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:30:47 no-preload-241090 kubelet[776]: I1227 10:30:47.829839     776 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-8fsf7" podStartSLOduration=4.452244644 podStartE2EDuration="11.829822604s" podCreationTimestamp="2025-12-27 10:30:36 +0000 UTC" firstStartedPulling="2025-12-27 10:30:36.573604434 +0000 UTC m=+8.828413173" lastFinishedPulling="2025-12-27 10:30:43.951182393 +0000 UTC m=+16.205991133" observedRunningTime="2025-12-27 10:30:44.176789985 +0000 UTC m=+16.431598733" watchObservedRunningTime="2025-12-27 10:30:47.829822604 +0000 UTC m=+20.084631344"
	Dec 27 10:30:48 no-preload-241090 kubelet[776]: E1227 10:30:48.181526     776 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-241090" containerName="kube-apiserver"
	Dec 27 10:30:50 no-preload-241090 kubelet[776]: E1227 10:30:50.188744     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:50 no-preload-241090 kubelet[776]: I1227 10:30:50.188790     776 scope.go:122] "RemoveContainer" containerID="98423e1437c047022ba4f3d40d968606a61840076744cacfa8f6f227f945c1ed"
	Dec 27 10:30:51 no-preload-241090 kubelet[776]: I1227 10:30:51.192988     776 scope.go:122] "RemoveContainer" containerID="98423e1437c047022ba4f3d40d968606a61840076744cacfa8f6f227f945c1ed"
	Dec 27 10:30:51 no-preload-241090 kubelet[776]: E1227 10:30:51.193168     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:51 no-preload-241090 kubelet[776]: I1227 10:30:51.193607     776 scope.go:122] "RemoveContainer" containerID="32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da"
	Dec 27 10:30:51 no-preload-241090 kubelet[776]: E1227 10:30:51.193795     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8pk5v_kubernetes-dashboard(0bb73329-3545-4a00-bed6-1f34345ded26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" podUID="0bb73329-3545-4a00-bed6-1f34345ded26"
	Dec 27 10:30:52 no-preload-241090 kubelet[776]: E1227 10:30:52.198066     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:52 no-preload-241090 kubelet[776]: I1227 10:30:52.198572     776 scope.go:122] "RemoveContainer" containerID="32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da"
	Dec 27 10:30:52 no-preload-241090 kubelet[776]: E1227 10:30:52.198830     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8pk5v_kubernetes-dashboard(0bb73329-3545-4a00-bed6-1f34345ded26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" podUID="0bb73329-3545-4a00-bed6-1f34345ded26"
	Dec 27 10:31:00 no-preload-241090 kubelet[776]: E1227 10:31:00.939400     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:31:00 no-preload-241090 kubelet[776]: I1227 10:31:00.939907     776 scope.go:122] "RemoveContainer" containerID="32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da"
	Dec 27 10:31:01 no-preload-241090 kubelet[776]: I1227 10:31:01.236188     776 scope.go:122] "RemoveContainer" containerID="32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da"
	Dec 27 10:31:01 no-preload-241090 kubelet[776]: E1227 10:31:01.236561     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:31:01 no-preload-241090 kubelet[776]: I1227 10:31:01.237168     776 scope.go:122] "RemoveContainer" containerID="f85ff0f366cab3121436ea435162a86c87b97787dc26bbbc8b0dd95316f338c4"
	Dec 27 10:31:01 no-preload-241090 kubelet[776]: E1227 10:31:01.237465     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8pk5v_kubernetes-dashboard(0bb73329-3545-4a00-bed6-1f34345ded26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" podUID="0bb73329-3545-4a00-bed6-1f34345ded26"
	Dec 27 10:31:04 no-preload-241090 kubelet[776]: I1227 10:31:04.246711     776 scope.go:122] "RemoveContainer" containerID="72ffaa9a1bdb7fd593a1ecea6b92e25f4d4fa7299fed5ff41307fd00c3e24018"
	Dec 27 10:31:08 no-preload-241090 kubelet[776]: E1227 10:31:08.458027     776 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5p545" containerName="coredns"
	Dec 27 10:31:10 no-preload-241090 kubelet[776]: E1227 10:31:10.939264     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:31:10 no-preload-241090 kubelet[776]: I1227 10:31:10.939764     776 scope.go:122] "RemoveContainer" containerID="f85ff0f366cab3121436ea435162a86c87b97787dc26bbbc8b0dd95316f338c4"
	Dec 27 10:31:10 no-preload-241090 kubelet[776]: E1227 10:31:10.940072     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8pk5v_kubernetes-dashboard(0bb73329-3545-4a00-bed6-1f34345ded26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" podUID="0bb73329-3545-4a00-bed6-1f34345ded26"
	Dec 27 10:31:22 no-preload-241090 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:31:23 no-preload-241090 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:31:23 no-preload-241090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d6dac70955d6676da0bde6507fddc874505c8c28166dd15dd89c7d34aac1b578] <==
	2025/12/27 10:30:44 Using namespace: kubernetes-dashboard
	2025/12/27 10:30:44 Using in-cluster config to connect to apiserver
	2025/12/27 10:30:44 Using secret token for csrf signing
	2025/12/27 10:30:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:30:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:30:44 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:30:44 Generating JWE encryption key
	2025/12/27 10:30:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:30:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:30:44 Initializing JWE encryption key from synchronized object
	2025/12/27 10:30:44 Creating in-cluster Sidecar client
	2025/12/27 10:30:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:30:44 Serving insecurely on HTTP port: 9090
	2025/12/27 10:31:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:30:44 Starting overwatch
	
	
	==> storage-provisioner [72ffaa9a1bdb7fd593a1ecea6b92e25f4d4fa7299fed5ff41307fd00c3e24018] <==
	I1227 10:30:33.742312       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:31:03.748428       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d] <==
	I1227 10:31:04.329657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:31:04.343123       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:31:04.344573       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:31:04.348294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:07.808196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:12.075854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:15.674378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:18.727999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:21.750042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:21.764279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:31:21.764518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:31:21.764832       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-241090_16b0c62f-838c-4f9f-90de-297e4e94d598!
	I1227 10:31:21.772947       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b161aa6-6257-4755-8180-933059c7757e", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-241090_16b0c62f-838c-4f9f-90de-297e4e94d598 became leader
	W1227 10:31:21.791086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:21.827632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:31:21.874490       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-241090_16b0c62f-838c-4f9f-90de-297e4e94d598!
	W1227 10:31:23.831016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:23.836641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:25.847439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:25.853396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241090 -n no-preload-241090
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241090 -n no-preload-241090: exit status 2 (633.450897ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-241090 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-241090
helpers_test.go:244: (dbg) docker inspect no-preload-241090:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a",
	        "Created": "2025-12-27T10:28:59.433064249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 512359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:30:20.810719909Z",
	            "FinishedAt": "2025-12-27T10:30:20.021260993Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/hosts",
	        "LogPath": "/var/lib/docker/containers/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a/f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a-json.log",
	        "Name": "/no-preload-241090",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-241090:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-241090",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3d580a4684b6edf1270649beef3b8746971f6510c656ffa3bbbe3b77d46b84a",
	                "LowerDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee591eb30e64320f58bc876e5f4c3e70bec0ad1db2be9ba637a3b1ce3440506f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-241090",
	                "Source": "/var/lib/docker/volumes/no-preload-241090/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-241090",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-241090",
	                "name.minikube.sigs.k8s.io": "no-preload-241090",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9ec164e021771abb81a3fbf99651f22c7d209aacc71f567c09b090d915edeec",
	            "SandboxKey": "/var/run/docker/netns/f9ec164e0217",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-241090": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:97:06:cc:93:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8d3a00ff7095640f7433799c8c32b498081b342b7c8dd02f4d6cb45f97d8125",
	                    "EndpointID": "ddf1a53e82b251e6c894bc500cc7d8cd8bb1209cdf4e33558208ce75e9a0b147",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-241090",
	                        "f3d580a4684b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241090 -n no-preload-241090
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241090 -n no-preload-241090: exit status 2 (349.378707ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241090 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-241090 logs -n 25: (1.248540622s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p default-k8s-diff-port-784377                                                                                                                                                                                                               │ default-k8s-diff-port-784377 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ ssh     │ force-systemd-flag-915850 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p force-systemd-flag-915850                                                                                                                                                                                                                  │ force-systemd-flag-915850    │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ delete  │ -p disable-driver-mounts-913868                                                                                                                                                                                                               │ disable-driver-mounts-913868 │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:28 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:28 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable metrics-server -p embed-certs-367691 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │                     │
	│ stop    │ -p embed-certs-367691 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ addons  │ enable dashboard -p embed-certs-367691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:29 UTC │
	│ start   │ -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:29 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable metrics-server -p no-preload-241090 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ stop    │ -p no-preload-241090 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable dashboard -p no-preload-241090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ image   │ embed-certs-367691 image list --format=json                                                                                                                                                                                                   │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ pause   │ -p embed-certs-367691 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691           │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ addons  │ enable metrics-server -p newest-cni-443576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ image   │ no-preload-241090 image list --format=json                                                                                                                                                                                                    │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ pause   │ -p no-preload-241090 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-241090            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ stop    │ -p newest-cni-443576 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ addons  │ enable dashboard -p newest-cni-443576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-443576            │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:31:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:31:25.867797  519461 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:31:25.868100  519461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:25.868134  519461 out.go:374] Setting ErrFile to fd 2...
	I1227 10:31:25.868251  519461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:25.868609  519461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:31:25.869161  519461 out.go:368] Setting JSON to false
	I1227 10:31:25.870420  519461 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8039,"bootTime":1766823447,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:31:25.870519  519461 start.go:143] virtualization:  
	I1227 10:31:25.873874  519461 out.go:179] * [newest-cni-443576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:31:25.878221  519461 notify.go:221] Checking for updates...
	I1227 10:31:25.882624  519461 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:31:25.885608  519461 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:31:25.888574  519461 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:31:25.891421  519461 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:31:25.894462  519461 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:31:25.897308  519461 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:31:25.900709  519461 config.go:182] Loaded profile config "newest-cni-443576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:31:25.901264  519461 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:31:25.933988  519461 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:31:25.934094  519461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:31:26.044700  519461 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:31:26.035077376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:31:26.044809  519461 docker.go:319] overlay module found
	I1227 10:31:26.047948  519461 out.go:179] * Using the docker driver based on existing profile
	I1227 10:31:26.051801  519461 start.go:309] selected driver: docker
	I1227 10:31:26.051867  519461 start.go:928] validating driver "docker" against &{Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:31:26.052043  519461 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:31:26.052878  519461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:31:26.149678  519461 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:31:26.137260446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:31:26.150142  519461 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1227 10:31:26.150175  519461 cni.go:84] Creating CNI manager for ""
	I1227 10:31:26.150228  519461 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:31:26.150259  519461 start.go:353] cluster config:
	{Name:newest-cni-443576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-443576 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:31:26.153485  519461 out.go:179] * Starting "newest-cni-443576" primary control-plane node in "newest-cni-443576" cluster
	I1227 10:31:26.156504  519461 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:31:26.159128  519461 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:31:26.162748  519461 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:31:26.162750  519461 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:31:26.162805  519461 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:31:26.162822  519461 cache.go:65] Caching tarball of preloaded images
	I1227 10:31:26.162903  519461 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:31:26.162914  519461 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:31:26.163100  519461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/newest-cni-443576/config.json ...
	I1227 10:31:26.199984  519461 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:31:26.200005  519461 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:31:26.200021  519461 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:31:26.200057  519461 start.go:360] acquireMachinesLock for newest-cni-443576: {Name:mka565ad41fecac1e9f8cd8d651491fd96f86258 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:31:26.200110  519461 start.go:364] duration metric: took 36.275µs to acquireMachinesLock for "newest-cni-443576"
	I1227 10:31:26.200128  519461 start.go:96] Skipping create...Using existing machine configuration
	I1227 10:31:26.200133  519461 fix.go:54] fixHost starting: 
	I1227 10:31:26.200394  519461 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:31:26.224644  519461 fix.go:112] recreateIfNeeded on newest-cni-443576: state=Stopped err=<nil>
	W1227 10:31:26.224675  519461 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 27 10:31:01 no-preload-241090 crio[655]: time="2025-12-27T10:31:01.276796498Z" level=info msg="Removed container 32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v/dashboard-metrics-scraper" id=75e17530-1bc2-418a-a6d9-e92a955ecf1d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 27 10:31:03 no-preload-241090 conmon[1167]: conmon 72ffaa9a1bdb7fd593a1 <ninfo>: container 1175 exited with status 1
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.247869665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=64832228-90e2-47a9-b81f-85b1d11c6c3a name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.249277286Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=17c8040e-4e2c-4d87-bbe6-e7fa705ba0d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.255162244Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8f5b72a0-bee2-4cb7-b0b2-c9d6768bbf18 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.255270176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.264309275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.264491939Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d9edd24076e5e6c3687b430de7df5c57f99fdc38bb994c507758d745325e48cb/merged/etc/passwd: no such file or directory"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.264513757Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d9edd24076e5e6c3687b430de7df5c57f99fdc38bb994c507758d745325e48cb/merged/etc/group: no such file or directory"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.264877206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.295441103Z" level=info msg="Created container f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d: kube-system/storage-provisioner/storage-provisioner" id=8f5b72a0-bee2-4cb7-b0b2-c9d6768bbf18 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.300182025Z" level=info msg="Starting container: f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d" id=d56a5d7b-f3b9-492d-846b-ac930e9f82df name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:31:04 no-preload-241090 crio[655]: time="2025-12-27T10:31:04.309471383Z" level=info msg="Started container" PID=1660 containerID=f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d description=kube-system/storage-provisioner/storage-provisioner id=d56a5d7b-f3b9-492d-846b-ac930e9f82df name=/runtime.v1.RuntimeService/StartContainer sandboxID=1681eff56fe9a005a5494f99f0f99ff34aba42ef35bc7a5fff7677b310a2eb9d
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.923209161Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.929278638Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.929329879Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.929357777Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.932836492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.932869444Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.93289296Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.936432007Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.936628464Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.936720568Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.94329323Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 27 10:31:13 no-preload-241090 crio[655]: time="2025-12-27T10:31:13.943327732Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f2e7e72f94b16       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   1681eff56fe9a       storage-provisioner                          kube-system
	f85ff0f366cab       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago       Exited              dashboard-metrics-scraper   2                   305388c3c12bb       dashboard-metrics-scraper-867fb5f87b-8pk5v   kubernetes-dashboard
	d6dac70955d66       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   098c3ab532fc7       kubernetes-dashboard-b84665fb8-8fsf7         kubernetes-dashboard
	b0b9d5cfc0f2d       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           55 seconds ago       Running             coredns                     1                   9dce979620fbd       coredns-7d764666f9-5p545                     kube-system
	72ffaa9a1bdb7       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   1681eff56fe9a       storage-provisioner                          kube-system
	df7e7c4968201       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   c56b64a00e1b4       busybox                                      default
	ec04e59ae6c95       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   71ed3dde7ff1c       kindnet-jh987                                kube-system
	c80de0856a7e9       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           55 seconds ago       Running             kube-proxy                  1                   58d24cc3ee2b0       kube-proxy-8xv88                             kube-system
	0be2bd393e285       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   076d84287b404       kube-scheduler-no-preload-241090             kube-system
	5ef714a1055a6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   2d3a6cc3ed868       kube-controller-manager-no-preload-241090    kube-system
	4264015374f91       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   6d404281eed17       kube-apiserver-no-preload-241090             kube-system
	96e2bc84c864d       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   d01e9091b928f       etcd-no-preload-241090                       kube-system
	
	
	==> coredns [b0b9d5cfc0f2dea1b46d3d2fc69780ae4cc75bc974985fea86b7ba9c545a6bb0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43285 - 29959 "HINFO IN 8390698111263921620.145518840492267225. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015758766s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-241090
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-241090
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=no-preload-241090
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_29_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:29:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-241090
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:31:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:31:03 +0000   Sat, 27 Dec 2025 10:29:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:31:03 +0000   Sat, 27 Dec 2025 10:29:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:31:03 +0000   Sat, 27 Dec 2025 10:29:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 27 Dec 2025 10:31:03 +0000   Sat, 27 Dec 2025 10:29:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-241090
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                a9a49f95-a33e-4498-b8f5-c7af217c180a
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-5p545                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-no-preload-241090                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-jh987                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-241090              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-241090     200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-8xv88                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-241090              100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-8pk5v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-8fsf7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node no-preload-241090 event: Registered Node no-preload-241090 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-241090 event: Registered Node no-preload-241090 in Controller
	
	
	==> dmesg <==
	[Dec27 10:01] overlayfs: idmapped layers are currently not supported
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	[Dec27 10:29] overlayfs: idmapped layers are currently not supported
	[ +34.978626] overlayfs: idmapped layers are currently not supported
	[Dec27 10:30] overlayfs: idmapped layers are currently not supported
	[Dec27 10:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [96e2bc84c864d4d7cc89f0f2517101b59c5cc5096c04209185554cf59b742f37] <==
	{"level":"info","ts":"2025-12-27T10:30:28.860529Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:30:28.860539Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-27T10:30:28.860725Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:30:28.860735Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-27T10:30:28.864302Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-27T10:30:28.864432Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-27T10:30:28.864528Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-27T10:30:29.804018Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:30:29.804174Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:30:29.804266Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-27T10:30:29.804330Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:30:29.804373Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:30:29.805683Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:30:29.805748Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:30:29.805790Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:30:29.805825Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-27T10:30:29.807041Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-241090 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:30:29.807114Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:30:29.807169Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:30:29.812055Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:30:29.817210Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:30:29.821301Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:30:29.885298Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-27T10:30:29.824668Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:30:29.908167Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:31:28 up  2:14,  0 user,  load average: 4.43, 3.04, 2.36
	Linux no-preload-241090 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec04e59ae6c952c8927f173f6ba8de9972a72aeb5ef19a32d5625b317dc3d76e] <==
	I1227 10:30:33.653022       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:30:33.717203       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1227 10:30:33.717423       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:30:33.717468       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:30:33.717511       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:30:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:30:33.922695       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:30:33.922791       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:30:33.922827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:30:33.923691       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1227 10:31:03.923235       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1227 10:31:03.923348       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1227 10:31:03.923409       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1227 10:31:03.924587       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1227 10:31:05.528521       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1227 10:31:05.528563       1 metrics.go:72] Registering metrics
	I1227 10:31:05.528632       1 controller.go:711] "Syncing nftables rules"
	I1227 10:31:13.922872       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:31:13.922965       1 main.go:301] handling current node
	I1227 10:31:23.928903       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1227 10:31:23.928944       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4264015374f91b531af599acfc367aa072b442eccc1ffead423255914a0d9f09] <==
	I1227 10:30:32.522002       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1227 10:30:32.522349       1 aggregator.go:187] initial CRD sync complete...
	I1227 10:30:32.522367       1 autoregister_controller.go:144] Starting autoregister controller
	I1227 10:30:32.522373       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1227 10:30:32.522378       1 cache.go:39] Caches are synced for autoregister controller
	I1227 10:30:32.532126       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1227 10:30:32.532150       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1227 10:30:32.532257       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1227 10:30:32.532330       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1227 10:30:32.538632       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1227 10:30:32.548932       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1227 10:30:32.570346       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:30:32.580202       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:30:32.595951       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:30:32.942535       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:30:33.053742       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:30:33.231386       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:30:33.393731       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:30:33.515996       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:30:33.568881       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:30:33.858248       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.254.7"}
	I1227 10:30:33.894082       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.174.251"}
	I1227 10:30:35.969260       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:30:36.019783       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:30:36.064907       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5ef714a1055a6cf93a2f1f0f649e4d4fa6f789af9150c2755a1c2d09b53037b1] <==
	I1227 10:30:35.480852       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481019       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481030       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481038       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.484732       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481083       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.484837       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:30:35.484888       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:30:35.484898       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:30:35.484904       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481091       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481097       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481103       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481109       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481129       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481046       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481052       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.481058       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.498971       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:30:35.481077       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.529134       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.584166       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:35.584318       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:30:35.584350       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:30:35.601441       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [c80de0856a7e97dcdca435688c0cce0be0c03d163eb8a2f5a8dcb13ec35e129d] <==
	I1227 10:30:33.685218       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:30:33.868806       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:30:33.969132       1 shared_informer.go:377] "Caches are synced"
	I1227 10:30:33.969178       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1227 10:30:33.969277       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:30:34.143929       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:30:34.145234       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:30:34.176701       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:30:34.177092       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:30:34.177118       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:30:34.180864       1 config.go:200] "Starting service config controller"
	I1227 10:30:34.180887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:30:34.180910       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:30:34.180914       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:30:34.180925       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:30:34.180929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:30:34.181563       1 config.go:309] "Starting node config controller"
	I1227 10:30:34.181617       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:30:34.181625       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:30:34.281294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:30:34.281488       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:30:34.282634       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0be2bd393e285cb49c8e5b5f66063ce6781e934558ad30c47aa3aec488565ab9] <==
	I1227 10:30:30.421700       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:30:32.248923       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:30:32.248955       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1227 10:30:32.248965       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:30:32.248982       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:30:32.401863       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:30:32.401900       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:30:32.414773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:30:32.414997       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:30:32.415043       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:30:32.415081       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:30:32.515222       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:30:47 no-preload-241090 kubelet[776]: I1227 10:30:47.829839     776 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-8fsf7" podStartSLOduration=4.452244644 podStartE2EDuration="11.829822604s" podCreationTimestamp="2025-12-27 10:30:36 +0000 UTC" firstStartedPulling="2025-12-27 10:30:36.573604434 +0000 UTC m=+8.828413173" lastFinishedPulling="2025-12-27 10:30:43.951182393 +0000 UTC m=+16.205991133" observedRunningTime="2025-12-27 10:30:44.176789985 +0000 UTC m=+16.431598733" watchObservedRunningTime="2025-12-27 10:30:47.829822604 +0000 UTC m=+20.084631344"
	Dec 27 10:30:48 no-preload-241090 kubelet[776]: E1227 10:30:48.181526     776 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-241090" containerName="kube-apiserver"
	Dec 27 10:30:50 no-preload-241090 kubelet[776]: E1227 10:30:50.188744     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:50 no-preload-241090 kubelet[776]: I1227 10:30:50.188790     776 scope.go:122] "RemoveContainer" containerID="98423e1437c047022ba4f3d40d968606a61840076744cacfa8f6f227f945c1ed"
	Dec 27 10:30:51 no-preload-241090 kubelet[776]: I1227 10:30:51.192988     776 scope.go:122] "RemoveContainer" containerID="98423e1437c047022ba4f3d40d968606a61840076744cacfa8f6f227f945c1ed"
	Dec 27 10:30:51 no-preload-241090 kubelet[776]: E1227 10:30:51.193168     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:51 no-preload-241090 kubelet[776]: I1227 10:30:51.193607     776 scope.go:122] "RemoveContainer" containerID="32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da"
	Dec 27 10:30:51 no-preload-241090 kubelet[776]: E1227 10:30:51.193795     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8pk5v_kubernetes-dashboard(0bb73329-3545-4a00-bed6-1f34345ded26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" podUID="0bb73329-3545-4a00-bed6-1f34345ded26"
	Dec 27 10:30:52 no-preload-241090 kubelet[776]: E1227 10:30:52.198066     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:30:52 no-preload-241090 kubelet[776]: I1227 10:30:52.198572     776 scope.go:122] "RemoveContainer" containerID="32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da"
	Dec 27 10:30:52 no-preload-241090 kubelet[776]: E1227 10:30:52.198830     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8pk5v_kubernetes-dashboard(0bb73329-3545-4a00-bed6-1f34345ded26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" podUID="0bb73329-3545-4a00-bed6-1f34345ded26"
	Dec 27 10:31:00 no-preload-241090 kubelet[776]: E1227 10:31:00.939400     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:31:00 no-preload-241090 kubelet[776]: I1227 10:31:00.939907     776 scope.go:122] "RemoveContainer" containerID="32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da"
	Dec 27 10:31:01 no-preload-241090 kubelet[776]: I1227 10:31:01.236188     776 scope.go:122] "RemoveContainer" containerID="32bac02e49857af8d168ba9a94075c50b8a556253cfe63d3529fed32548e45da"
	Dec 27 10:31:01 no-preload-241090 kubelet[776]: E1227 10:31:01.236561     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:31:01 no-preload-241090 kubelet[776]: I1227 10:31:01.237168     776 scope.go:122] "RemoveContainer" containerID="f85ff0f366cab3121436ea435162a86c87b97787dc26bbbc8b0dd95316f338c4"
	Dec 27 10:31:01 no-preload-241090 kubelet[776]: E1227 10:31:01.237465     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8pk5v_kubernetes-dashboard(0bb73329-3545-4a00-bed6-1f34345ded26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" podUID="0bb73329-3545-4a00-bed6-1f34345ded26"
	Dec 27 10:31:04 no-preload-241090 kubelet[776]: I1227 10:31:04.246711     776 scope.go:122] "RemoveContainer" containerID="72ffaa9a1bdb7fd593a1ecea6b92e25f4d4fa7299fed5ff41307fd00c3e24018"
	Dec 27 10:31:08 no-preload-241090 kubelet[776]: E1227 10:31:08.458027     776 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5p545" containerName="coredns"
	Dec 27 10:31:10 no-preload-241090 kubelet[776]: E1227 10:31:10.939264     776 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" containerName="dashboard-metrics-scraper"
	Dec 27 10:31:10 no-preload-241090 kubelet[776]: I1227 10:31:10.939764     776 scope.go:122] "RemoveContainer" containerID="f85ff0f366cab3121436ea435162a86c87b97787dc26bbbc8b0dd95316f338c4"
	Dec 27 10:31:10 no-preload-241090 kubelet[776]: E1227 10:31:10.940072     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-8pk5v_kubernetes-dashboard(0bb73329-3545-4a00-bed6-1f34345ded26)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-8pk5v" podUID="0bb73329-3545-4a00-bed6-1f34345ded26"
	Dec 27 10:31:22 no-preload-241090 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:31:23 no-preload-241090 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:31:23 no-preload-241090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d6dac70955d6676da0bde6507fddc874505c8c28166dd15dd89c7d34aac1b578] <==
	2025/12/27 10:30:44 Using namespace: kubernetes-dashboard
	2025/12/27 10:30:44 Using in-cluster config to connect to apiserver
	2025/12/27 10:30:44 Using secret token for csrf signing
	2025/12/27 10:30:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/27 10:30:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/27 10:30:44 Successful initial request to the apiserver, version: v1.35.0
	2025/12/27 10:30:44 Generating JWE encryption key
	2025/12/27 10:30:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/27 10:30:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/27 10:30:44 Initializing JWE encryption key from synchronized object
	2025/12/27 10:30:44 Creating in-cluster Sidecar client
	2025/12/27 10:30:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:30:44 Serving insecurely on HTTP port: 9090
	2025/12/27 10:31:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/27 10:30:44 Starting overwatch
	
	
	==> storage-provisioner [72ffaa9a1bdb7fd593a1ecea6b92e25f4d4fa7299fed5ff41307fd00c3e24018] <==
	I1227 10:30:33.742312       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1227 10:31:03.748428       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f2e7e72f94b16b04dea768314e389b160ce23c6afffff0e6783d3bb4cd88b99d] <==
	I1227 10:31:04.329657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1227 10:31:04.343123       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1227 10:31:04.344573       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1227 10:31:04.348294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:07.808196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:12.075854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:15.674378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:18.727999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:21.750042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:21.764279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:31:21.764518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1227 10:31:21.764832       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-241090_16b0c62f-838c-4f9f-90de-297e4e94d598!
	I1227 10:31:21.772947       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b161aa6-6257-4755-8180-933059c7757e", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-241090_16b0c62f-838c-4f9f-90de-297e4e94d598 became leader
	W1227 10:31:21.791086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:21.827632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1227 10:31:21.874490       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-241090_16b0c62f-838c-4f9f-90de-297e4e94d598!
	W1227 10:31:23.831016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:23.836641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:25.847439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:25.853396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:27.860591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1227 10:31:27.865599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241090 -n no-preload-241090
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-241090 -n no-preload-241090: exit status 2 (364.176832ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-241090 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (8.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-443576 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-443576 --alsologtostderr -v=1: exit status 80 (2.287079653s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-443576 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:31:43.473741  522353 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:31:43.473926  522353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:43.473955  522353 out.go:374] Setting ErrFile to fd 2...
	I1227 10:31:43.473977  522353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:43.474382  522353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:31:43.474839  522353 out.go:368] Setting JSON to false
	I1227 10:31:43.474894  522353 mustload.go:66] Loading cluster: newest-cni-443576
	I1227 10:31:43.476390  522353 config.go:182] Loaded profile config "newest-cni-443576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:31:43.477205  522353 cli_runner.go:164] Run: docker container inspect newest-cni-443576 --format={{.State.Status}}
	I1227 10:31:43.538070  522353 host.go:66] Checking if "newest-cni-443576" exists ...
	I1227 10:31:43.538404  522353 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:31:43.671246  522353 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:31:43.657470269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:31:43.680268  522353 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1766719468-22158/minikube-v1.37.0-1766719468-22158-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1766719468-22158-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-443576 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1227 10:31:43.722296  522353 out.go:179] * Pausing node newest-cni-443576 ... 
	I1227 10:31:43.730901  522353 host.go:66] Checking if "newest-cni-443576" exists ...
	I1227 10:31:43.731367  522353 ssh_runner.go:195] Run: systemctl --version
	I1227 10:31:43.731456  522353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-443576
	I1227 10:31:43.764522  522353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/newest-cni-443576/id_rsa Username:docker}
	I1227 10:31:43.871354  522353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:31:43.911764  522353 pause.go:52] kubelet running: true
	I1227 10:31:43.911830  522353 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:31:44.179909  522353 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:31:44.180019  522353 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:31:44.284223  522353 cri.go:96] found id: "dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9"
	I1227 10:31:44.284246  522353 cri.go:96] found id: "521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d"
	I1227 10:31:44.284251  522353 cri.go:96] found id: "9d86a268fe4d18a181b8ab2bd18ad8d28c04e4290fc23cd5cd9adaf6b181c326"
	I1227 10:31:44.284255  522353 cri.go:96] found id: "17ebce0a96d9023699d1bbeb012c333be3ff9509ae89fe7ee066a6a756ba87fa"
	I1227 10:31:44.284259  522353 cri.go:96] found id: "ff3729c296e1ad6bd2cf7a96c1c320f1d95f0859554b86594219e09719044953"
	I1227 10:31:44.284280  522353 cri.go:96] found id: "9b49c8e0c5ad139af1dca95dd998c781b665b27fd74dac2b63d77d87a262b4a8"
	I1227 10:31:44.284285  522353 cri.go:96] found id: ""
	I1227 10:31:44.284337  522353 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:31:44.296108  522353 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:31:44Z" level=error msg="open /run/runc: no such file or directory"
	I1227 10:31:44.434422  522353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:31:44.447752  522353 pause.go:52] kubelet running: false
	I1227 10:31:44.447904  522353 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:31:44.651343  522353 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:31:44.651440  522353 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:31:44.736554  522353 cri.go:96] found id: "dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9"
	I1227 10:31:44.736575  522353 cri.go:96] found id: "521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d"
	I1227 10:31:44.736581  522353 cri.go:96] found id: "9d86a268fe4d18a181b8ab2bd18ad8d28c04e4290fc23cd5cd9adaf6b181c326"
	I1227 10:31:44.736586  522353 cri.go:96] found id: "17ebce0a96d9023699d1bbeb012c333be3ff9509ae89fe7ee066a6a756ba87fa"
	I1227 10:31:44.736590  522353 cri.go:96] found id: "ff3729c296e1ad6bd2cf7a96c1c320f1d95f0859554b86594219e09719044953"
	I1227 10:31:44.736594  522353 cri.go:96] found id: "9b49c8e0c5ad139af1dca95dd998c781b665b27fd74dac2b63d77d87a262b4a8"
	I1227 10:31:44.736597  522353 cri.go:96] found id: ""
	I1227 10:31:44.736640  522353 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:31:45.285486  522353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:31:45.317434  522353 pause.go:52] kubelet running: false
	I1227 10:31:45.317543  522353 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1227 10:31:45.531473  522353 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1227 10:31:45.531556  522353 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1227 10:31:45.613081  522353 cri.go:96] found id: "dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9"
	I1227 10:31:45.613102  522353 cri.go:96] found id: "521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d"
	I1227 10:31:45.613107  522353 cri.go:96] found id: "9d86a268fe4d18a181b8ab2bd18ad8d28c04e4290fc23cd5cd9adaf6b181c326"
	I1227 10:31:45.613111  522353 cri.go:96] found id: "17ebce0a96d9023699d1bbeb012c333be3ff9509ae89fe7ee066a6a756ba87fa"
	I1227 10:31:45.613114  522353 cri.go:96] found id: "ff3729c296e1ad6bd2cf7a96c1c320f1d95f0859554b86594219e09719044953"
	I1227 10:31:45.613118  522353 cri.go:96] found id: "9b49c8e0c5ad139af1dca95dd998c781b665b27fd74dac2b63d77d87a262b4a8"
	I1227 10:31:45.613121  522353 cri.go:96] found id: ""
	I1227 10:31:45.613170  522353 ssh_runner.go:195] Run: sudo runc list -f json
	I1227 10:31:45.632318  522353 out.go:203] 
	W1227 10:31:45.639386  522353 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T10:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1227 10:31:45.639415  522353 out.go:285] * 
	* 
	W1227 10:31:45.646069  522353 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:31:45.652539  522353 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-443576 --alsologtostderr -v=1 failed: exit status 80
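Editor's note: the pause failure above comes from "sudo runc list -f json" returning "open /run/runc: no such file or directory" inside the node, i.e. the state directory that runc reads by default was missing when the pause path enumerated containers. A minimal manual check, assuming the node container from this run is still up (these are illustrative diagnostic commands, not part of the test suite):

	# Confirm the kic node container is running on the host.
	docker ps --filter name=newest-cni-443576

	# Look for runc's default state directory inside the node.
	out/minikube-linux-arm64 ssh -p newest-cni-443576 -- sudo ls -la /run/runc

	# Repeat the exact listing the pause path uses, then compare with crictl's view.
	out/minikube-linux-arm64 ssh -p newest-cni-443576 -- sudo runc --root /run/runc list -f json
	out/minikube-linux-arm64 ssh -p newest-cni-443576 -- sudo crictl ps

If crictl reports running containers while the runc listing still fails, that would point at a mismatch between where the runtime keeps runc state and where the pause code looks, rather than at the containers themselves.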
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-443576
helpers_test.go:244: (dbg) docker inspect newest-cni-443576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979",
	        "Created": "2025-12-27T10:30:53.483860982Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519640,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:31:26.266590401Z",
	            "FinishedAt": "2025-12-27T10:31:25.191765885Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/hosts",
	        "LogPath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979-json.log",
	        "Name": "/newest-cni-443576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-443576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-443576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979",
	                "LowerDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-443576",
	                "Source": "/var/lib/docker/volumes/newest-cni-443576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-443576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-443576",
	                "name.minikube.sigs.k8s.io": "newest-cni-443576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26769122de6e74b9ed47da7234be401f3dae9c9c6ee63c34f61b86308e9be478",
	            "SandboxKey": "/var/run/docker/netns/26769122de6e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-443576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:76:e1:4d:d9:58",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c76023b32637880f7809253f7e724cbfc74cd2ad7e3ca1594922140ba274d2b",
	                    "EndpointID": "ba55c7c3eb6b0342879831150e3bef8c35200ae4029acce7e30a0af94be03484",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-443576",
	                        "1f8734c86b7f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
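Editor's note: one detail worth flagging in the inspect output above is that the node mounts both /run and /tmp as tmpfs ("Tmpfs": {"/run": "", "/tmp": ""}), so /run/runc starts empty after every container restart and only reappears once the runtime creates containers there. That is consistent with, though not proof of, the "open /run/runc: no such file or directory" error hitting a pause attempted seconds after the 10:31:26 restart recorded in StartedAt. A quick way to pull just that field (illustrative command, not part of the helper output):

	# Print only the tmpfs mounts configured for the node container.
	docker inspect newest-cni-443576 --format '{{json .HostConfig.Tmpfs}}'
	# For this node, the dump above implies: {"/run":"","/tmp":""}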
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-443576 -n newest-cni-443576
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-443576 -n newest-cni-443576: exit status 2 (497.383629ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
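Editor's note: the helper only samples the Host field, which still reports Running even though pause failed. To see which individual components are unhealthy after a failed pause, the full status output is more useful (sketch; minikube status also supports a JSON output mode):

	# Full per-component status for the profile.
	out/minikube-linux-arm64 status -p newest-cni-443576 --output=json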
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-443576 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-443576 logs -n 25: (1.351989591s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ stop    │ -p no-preload-241090 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable dashboard -p no-preload-241090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ image   │ embed-certs-367691 image list --format=json                                                                                                                                                                                                   │ embed-certs-367691                │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ pause   │ -p embed-certs-367691 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-367691                │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691                │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691                │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ addons  │ enable metrics-server -p newest-cni-443576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ image   │ no-preload-241090 image list --format=json                                                                                                                                                                                                    │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ pause   │ -p no-preload-241090 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ stop    │ -p newest-cni-443576 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ addons  │ enable dashboard -p newest-cni-443576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ delete  │ -p no-preload-241090                                                                                                                                                                                                                          │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ delete  │ -p no-preload-241090                                                                                                                                                                                                                          │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p test-preload-dl-gcs-589969 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-589969        │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-589969                                                                                                                                                                                                                 │ test-preload-dl-gcs-589969        │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p test-preload-dl-github-997750 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-997750     │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ image   │ newest-cni-443576 image list --format=json                                                                                                                                                                                                    │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ pause   │ -p newest-cni-443576 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ delete  │ -p test-preload-dl-github-997750                                                                                                                                                                                                              │ test-preload-dl-github-997750     │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-755434 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-755434 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-755434                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-755434 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p auto-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-785247                       │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:31:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:31:44.947289  522690 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:31:44.947961  522690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:44.948025  522690 out.go:374] Setting ErrFile to fd 2...
	I1227 10:31:44.948047  522690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:44.948354  522690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:31:44.948830  522690 out.go:368] Setting JSON to false
	I1227 10:31:44.949700  522690 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8058,"bootTime":1766823447,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:31:44.949799  522690 start.go:143] virtualization:  
	I1227 10:31:44.952935  522690 out.go:179] * [auto-785247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:31:44.956966  522690 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:31:44.957088  522690 notify.go:221] Checking for updates...
	I1227 10:31:44.963141  522690 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:31:44.966198  522690 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:31:44.969186  522690 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:31:44.972151  522690 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:31:44.975015  522690 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:31:44.978415  522690 config.go:182] Loaded profile config "newest-cni-443576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:31:44.978543  522690 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:31:45.013367  522690 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:31:45.013641  522690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:31:45.132742  522690 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:31:45.119099719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:31:45.132859  522690 docker.go:319] overlay module found
	I1227 10:31:45.151211  522690 out.go:179] * Using the docker driver based on user configuration
	I1227 10:31:45.161523  522690 start.go:309] selected driver: docker
	I1227 10:31:45.161550  522690 start.go:928] validating driver "docker" against <nil>
	I1227 10:31:45.161566  522690 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:31:45.162768  522690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:31:45.247576  522690 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:31:45.23416864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:31:45.247752  522690 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:31:45.248158  522690 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:31:45.251458  522690 out.go:179] * Using Docker driver with root privileges
	I1227 10:31:45.254941  522690 cni.go:84] Creating CNI manager for ""
	I1227 10:31:45.255099  522690 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:31:45.255116  522690 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:31:45.255223  522690 start.go:353] cluster config:
	{Name:auto-785247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-785247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1227 10:31:45.258700  522690 out.go:179] * Starting "auto-785247" primary control-plane node in "auto-785247" cluster
	I1227 10:31:45.261973  522690 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:31:45.265527  522690 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:31:45.271948  522690 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:31:45.272027  522690 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:31:45.272076  522690 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:31:45.272089  522690 cache.go:65] Caching tarball of preloaded images
	I1227 10:31:45.272227  522690 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:31:45.272245  522690 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:31:45.272406  522690 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/config.json ...
	I1227 10:31:45.272450  522690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/config.json: {Name:mkeb4678b36278f80604530ac1a025792d7132dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:45.315552  522690 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:31:45.315581  522690 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:31:45.315640  522690 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:31:45.315681  522690 start.go:360] acquireMachinesLock for auto-785247: {Name:mk39125bf7403ab32739efdb166a1a1e645c8b2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:31:45.315886  522690 start.go:364] duration metric: took 168.231µs to acquireMachinesLock for "auto-785247"
	I1227 10:31:45.316207  522690 start.go:93] Provisioning new machine with config: &{Name:auto-785247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-785247 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:31:45.316354  522690 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.105135267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.113378107Z" level=info msg="Running pod sandbox: kube-system/kindnet-5d2fh/POD" id=525f1420-f299-40ab-949f-84c8ec6d526c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.113491709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.137456186Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=525f1420-f299-40ab-949f-84c8ec6d526c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.138641437Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c4021eed-c4f8-4958-a95b-011909b19209 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.14567373Z" level=info msg="Ran pod sandbox 473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17 with infra container: kube-system/kindnet-5d2fh/POD" id=525f1420-f299-40ab-949f-84c8ec6d526c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.147020977Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=a0cc1065-8d1c-4cda-8242-4fc8e91ea4c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.148265264Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b2863e8b-f845-440e-a4b7-7237f2d88fc4 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.14966262Z" level=info msg="Creating container: kube-system/kindnet-5d2fh/kindnet-cni" id=a5504ecb-b3e8-4090-b1ed-09fa5b1dbb75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.14989166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.173111359Z" level=info msg="Ran pod sandbox 4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412 with infra container: kube-system/kube-proxy-xj5vc/POD" id=c4021eed-c4f8-4958-a95b-011909b19209 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.174573963Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=1844e5e1-d48b-4bbb-916e-540272214a7d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.175874046Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=615f2298-333d-44fd-9f1c-86f7c1c4cc33 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.177153936Z" level=info msg="Creating container: kube-system/kube-proxy-xj5vc/kube-proxy" id=1d45656c-db35-48ee-b492-cde372c6ef0f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.177417684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.19163748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.202921246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.205750231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.206051091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.27454037Z" level=info msg="Created container dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9: kube-system/kube-proxy-xj5vc/kube-proxy" id=1d45656c-db35-48ee-b492-cde372c6ef0f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.275535531Z" level=info msg="Starting container: dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9" id=d5eae6c8-730b-4622-b159-e790486d113c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.276089679Z" level=info msg="Created container 521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d: kube-system/kindnet-5d2fh/kindnet-cni" id=a5504ecb-b3e8-4090-b1ed-09fa5b1dbb75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.277488872Z" level=info msg="Starting container: 521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d" id=ad5a8be3-38f5-4943-a58e-3cfd090fda77 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.284233433Z" level=info msg="Started container" PID=1076 containerID=dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9 description=kube-system/kube-proxy-xj5vc/kube-proxy id=d5eae6c8-730b-4622-b159-e790486d113c name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.284697463Z" level=info msg="Started container" PID=1077 containerID=521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d description=kube-system/kindnet-5d2fh/kindnet-cni id=ad5a8be3-38f5-4943-a58e-3cfd090fda77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	dfccd09882b3f       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   5 seconds ago       Running             kube-proxy                1                   4ec1adf8fc459       kube-proxy-xj5vc                            kube-system
	521b827abe2b3       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   5 seconds ago       Running             kindnet-cni               1                   473a5664dcd9d       kindnet-5d2fh                               kube-system
	9d86a268fe4d1       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   12 seconds ago      Running             kube-controller-manager   1                   515bf9b1b0810       kube-controller-manager-newest-cni-443576   kube-system
	17ebce0a96d90       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   12 seconds ago      Running             kube-scheduler            1                   8fb96fdfa74b4       kube-scheduler-newest-cni-443576            kube-system
	ff3729c296e1a       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   12 seconds ago      Running             kube-apiserver            1                   b92225d70bd05       kube-apiserver-newest-cni-443576            kube-system
	9b49c8e0c5ad1       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   12 seconds ago      Running             etcd                      1                   39579ae0a89dd       etcd-newest-cni-443576                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-443576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-443576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=newest-cni-443576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_31_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:31:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-443576
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:31:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:31:40 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:31:40 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:31:40 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 10:31:40 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-443576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                469dccd3-aab7-4ad3-8e7d-e13b529d966f
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-443576                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-5d2fh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-443576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-443576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-xj5vc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-443576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node newest-cni-443576 event: Registered Node newest-cni-443576 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-443576 event: Registered Node newest-cni-443576 in Controller
	
	
	==> dmesg <==
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	[Dec27 10:29] overlayfs: idmapped layers are currently not supported
	[ +34.978626] overlayfs: idmapped layers are currently not supported
	[Dec27 10:30] overlayfs: idmapped layers are currently not supported
	[Dec27 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.977751] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9b49c8e0c5ad139af1dca95dd998c781b665b27fd74dac2b63d77d87a262b4a8] <==
	{"level":"info","ts":"2025-12-27T10:31:35.572554Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:31:35.572829Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:31:35.572947Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:31:35.989525Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:35.989595Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:35.989640Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:35.989652Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:31:35.989668Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:31:35.993374Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:31:35.993426Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:31:35.993446Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:31:35.993456Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:31:36.005900Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-443576 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:31:36.006113Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:31:36.013404Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:31:36.030988Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:31:36.032202Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:31:36.032225Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:31:36.112665Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:31:36.144875Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:31:36.181083Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	2025/12/27 10:31:44 WARNING: [core] [Server #3]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-12-27T10:31:44.178257Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.688765ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b\" limit:1 ","response":"range_response_count:1 size:3003"}
	{"level":"info","ts":"2025-12-27T10:31:44.178365Z","caller":"traceutil/trace.go:172","msg":"trace[1885251736] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b; range_end:; response_count:1; response_revision:545; }","duration":"135.80509ms","start":"2025-12-27T10:31:44.042545Z","end":"2025-12-27T10:31:44.178350Z","steps":["trace[1885251736] 'agreement among raft nodes before linearized reading'  (duration: 110.122047ms)","trace[1885251736] 'range keys from in-memory index tree'  (duration: 25.53169ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T10:31:44.205778Z","caller":"traceutil/trace.go:172","msg":"trace[1239789975] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"110.258172ms","start":"2025-12-27T10:31:44.095506Z","end":"2025-12-27T10:31:44.205764Z","steps":["trace[1239789975] 'process raft request'  (duration: 110.14503ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:31:47 up  2:14,  0 user,  load average: 5.73, 3.40, 2.49
	Linux newest-cni-443576 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d] <==
	I1227 10:31:41.417590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:31:41.417822       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:31:41.417937       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:31:41.417949       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:31:41.417959       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:31:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:31:41.640622       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:31:41.640703       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:31:41.640738       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:31:41.641566       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [ff3729c296e1ad6bd2cf7a96c1c320f1d95f0859554b86594219e09719044953] <==
	I1227 10:31:39.980053       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:31:39.989567       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:39.989591       1 policy_source.go:248] refreshing policies
	E1227 10:31:39.989974       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:31:40.001040       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:31:40.022794       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:31:40.024109       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:31:40.583779       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:31:40.858447       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:31:40.946999       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:31:41.115655       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:31:41.287492       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:31:41.330622       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:31:41.532470       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.19.21"}
	I1227 10:31:41.585584       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.145.39"}
	I1227 10:31:43.519209       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:31:43.617640       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:31:43.702851       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:31:43.721163       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	{"level":"warn","ts":"2025-12-27T10:31:44.128428Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001890b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1227 10:31:44.129055       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1227 10:31:44.129083       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1227 10:31:44.129107       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.434µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1227 10:31:44.130282       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1227 10:31:44.130408       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.480261ms" method="PATCH" path="/api/v1/namespaces/kube-system/pods/kindnet-5d2fh/status" result=null
	
	
	==> kube-controller-manager [9d86a268fe4d18a181b8ab2bd18ad8d28c04e4290fc23cd5cd9adaf6b181c326] <==
	I1227 10:31:43.016721       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016729       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016744       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016758       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016764       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016771       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016777       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016782       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.018075       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.023545       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025865       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025882       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025889       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025902       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025909       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.054958       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:31:43.055017       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:31:43.055048       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:31:43.055078       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025916       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016751       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.100262       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.100369       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:31:43.100399       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:31:43.101124       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9] <==
	I1227 10:31:41.440821       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:31:41.751891       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:31:41.953004       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:41.953059       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:31:41.953150       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:31:42.076259       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:31:42.085786       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:31:42.096218       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:31:42.096701       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:31:42.096805       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:31:42.100740       1 config.go:200] "Starting service config controller"
	I1227 10:31:42.100855       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:31:42.100916       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:31:42.100971       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:31:42.101036       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:31:42.101074       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:31:42.101889       1 config.go:309] "Starting node config controller"
	I1227 10:31:42.132976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:31:42.220796       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:31:42.304066       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:31:42.306274       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:31:42.306293       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [17ebce0a96d9023699d1bbeb012c333be3ff9509ae89fe7ee066a6a756ba87fa] <==
	I1227 10:31:38.657954       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:31:39.889058       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:31:39.889083       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1227 10:31:39.889102       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:31:39.889111       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:31:40.073183       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:31:40.073209       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:31:40.084650       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:31:40.096232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:31:40.096259       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:31:40.096298       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:31:40.297140       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.232563     737 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-443576"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.232591     737 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.233589     737 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.259750     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-443576\" already exists" pod="kube-system/kube-controller-manager-newest-cni-443576"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.259794     737 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-443576"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.277749     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-443576\" already exists" pod="kube-system/kube-scheduler-newest-cni-443576"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.777138     737 apiserver.go:52] "Watching apiserver"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.795555     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-443576" containerName="kube-apiserver"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.802927     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-443576" containerName="etcd"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.803267     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-443576" containerName="kube-scheduler"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.803522     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-443576" containerName="kube-controller-manager"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.888514     737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927460     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50656616-4132-47e7-a39a-86fcb9ca8a73-xtables-lock\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927530     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda1f65f-8d91-4868-a723-87bf2ec5bef8-lib-modules\") pod \"kube-proxy-xj5vc\" (UID: \"dda1f65f-8d91-4868-a723-87bf2ec5bef8\") " pod="kube-system/kube-proxy-xj5vc"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927577     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/50656616-4132-47e7-a39a-86fcb9ca8a73-cni-cfg\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927601     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda1f65f-8d91-4868-a723-87bf2ec5bef8-xtables-lock\") pod \"kube-proxy-xj5vc\" (UID: \"dda1f65f-8d91-4868-a723-87bf2ec5bef8\") " pod="kube-system/kube-proxy-xj5vc"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927635     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50656616-4132-47e7-a39a-86fcb9ca8a73-lib-modules\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.989932     737 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:31:41 newest-cni-443576 kubelet[737]: W1227 10:31:41.143541     737 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/crio-473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17 WatchSource:0}: Error finding container 473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17: Status 404 returned error can't find the container with id 473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17
	Dec 27 10:31:41 newest-cni-443576 kubelet[737]: W1227 10:31:41.170577     737 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/crio-4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412 WatchSource:0}: Error finding container 4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412: Status 404 returned error can't find the container with id 4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412
	Dec 27 10:31:41 newest-cni-443576 kubelet[737]: E1227 10:31:41.361711     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-443576" containerName="etcd"
	Dec 27 10:31:41 newest-cni-443576 kubelet[737]: E1227 10:31:41.489706     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-443576" containerName="kube-scheduler"
	Dec 27 10:31:44 newest-cni-443576 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:31:44 newest-cni-443576 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:31:44 newest-cni-443576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
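The kubelet section above ends with systemd stopping kubelet.service at 10:31:44, right before this post-mortem starts. If the node container is still up, the unit state can be confirmed over ssh; a minimal sketch, reusing the profile name from this run:

	out/minikube-linux-arm64 -p newest-cni-443576 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-arm64 -p newest-cni-443576 ssh -- sudo journalctl -u kubelet --no-pager -n 20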
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-443576 -n newest-cni-443576
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-443576 -n newest-cni-443576: exit status 2 (491.962355ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-443576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-w5pw2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-j49zh kubernetes-dashboard-b84665fb8-fp67p
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-443576 describe pod coredns-7d764666f9-w5pw2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-j49zh kubernetes-dashboard-b84665fb8-fp67p
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-443576 describe pod coredns-7d764666f9-w5pw2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-j49zh kubernetes-dashboard-b84665fb8-fp67p: exit status 1 (109.901524ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-w5pw2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-j49zh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fp67p" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-443576 describe pod coredns-7d764666f9-w5pw2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-j49zh kubernetes-dashboard-b84665fb8-fp67p: exit status 1
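For manual triage, the harness's field-selector query can be extended to print namespace and phase for each non-running pod; a sketch, assuming the newest-cni-443576 context is still reachable:

	kubectl --context newest-cni-443576 get pods -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'

The describe calls above return NotFound, most likely because the listed pods were removed between the list and the describe, so re-running the list immediately before describing narrows that window.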
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-443576
helpers_test.go:244: (dbg) docker inspect newest-cni-443576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979",
	        "Created": "2025-12-27T10:30:53.483860982Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519640,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:31:26.266590401Z",
	            "FinishedAt": "2025-12-27T10:31:25.191765885Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/hosts",
	        "LogPath": "/var/lib/docker/containers/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979-json.log",
	        "Name": "/newest-cni-443576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-443576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-443576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979",
	                "LowerDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058-init/diff:/var/lib/docker/overlay2/497217ac9ef1c1f1b82ea8d7581e8803e8a30cce98305363d99b1ecb1ccc4cac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1473f37293f3226a24aea7e9a4af72bf49e455aae80820ef773d24a2b6d5058/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-443576",
	                "Source": "/var/lib/docker/volumes/newest-cni-443576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-443576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-443576",
	                "name.minikube.sigs.k8s.io": "newest-cni-443576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26769122de6e74b9ed47da7234be401f3dae9c9c6ee63c34f61b86308e9be478",
	            "SandboxKey": "/var/run/docker/netns/26769122de6e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-443576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:76:e1:4d:d9:58",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c76023b32637880f7809253f7e724cbfc74cd2ad7e3ca1594922140ba274d2b",
	                    "EndpointID": "ba55c7c3eb6b0342879831150e3bef8c35200ae4029acce7e30a0af94be03484",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-443576",
	                        "1f8734c86b7f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
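The port map in the inspect output shows the API server port (container port 8443) published on 127.0.0.1:33456, and State reports Running=true with Paused=false. Both values can be read directly with Go templates instead of scanning the full JSON; a sketch against this container name:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-443576
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-443576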
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-443576 -n newest-cni-443576
E1227 10:31:48.405155  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:48.410593  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:48.420802  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:48.441098  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:48.484108  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:48.564422  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:48.724834  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-443576 -n newest-cni-443576: exit status 2 (472.526416ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
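Instead of separate {{.APIServer}} and {{.Host}} queries, one status call can report the main components together; a sketch using the same --format template mechanism (the Kubelet field follows the same naming as the default status output):

	out/minikube-linux-arm64 status -p newest-cni-443576 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'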
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-443576 logs -n 25
E1227 10:31:49.048159  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:49.688857  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-443576 logs -n 25: (1.905751879s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ stop    │ -p no-preload-241090 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ addons  │ enable dashboard -p no-preload-241090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ image   │ embed-certs-367691 image list --format=json                                                                                                                                                                                                   │ embed-certs-367691                │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ pause   │ -p embed-certs-367691 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-367691                │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │                     │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691                │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ delete  │ -p embed-certs-367691                                                                                                                                                                                                                         │ embed-certs-367691                │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:30 UTC │
	│ start   │ -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:30 UTC │ 27 Dec 25 10:31 UTC │
	│ addons  │ enable metrics-server -p newest-cni-443576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ image   │ no-preload-241090 image list --format=json                                                                                                                                                                                                    │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ pause   │ -p no-preload-241090 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ stop    │ -p newest-cni-443576 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ addons  │ enable dashboard -p newest-cni-443576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ delete  │ -p no-preload-241090                                                                                                                                                                                                                          │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ delete  │ -p no-preload-241090                                                                                                                                                                                                                          │ no-preload-241090                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p test-preload-dl-gcs-589969 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-589969        │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-589969                                                                                                                                                                                                                 │ test-preload-dl-gcs-589969        │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p test-preload-dl-github-997750 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-997750     │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ image   │ newest-cni-443576 image list --format=json                                                                                                                                                                                                    │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ pause   │ -p newest-cni-443576 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-443576                 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ delete  │ -p test-preload-dl-github-997750                                                                                                                                                                                                              │ test-preload-dl-github-997750     │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-755434 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-755434 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-755434                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-755434 │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │ 27 Dec 25 10:31 UTC │
	│ start   │ -p auto-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-785247                       │ jenkins │ v1.37.0 │ 27 Dec 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:31:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:31:44.947289  522690 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:31:44.947961  522690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:44.948025  522690 out.go:374] Setting ErrFile to fd 2...
	I1227 10:31:44.948047  522690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:31:44.948354  522690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:31:44.948830  522690 out.go:368] Setting JSON to false
	I1227 10:31:44.949700  522690 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8058,"bootTime":1766823447,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:31:44.949799  522690 start.go:143] virtualization:  
	I1227 10:31:44.952935  522690 out.go:179] * [auto-785247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:31:44.956966  522690 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:31:44.957088  522690 notify.go:221] Checking for updates...
	I1227 10:31:44.963141  522690 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:31:44.966198  522690 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:31:44.969186  522690 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:31:44.972151  522690 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:31:44.975015  522690 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:31:44.978415  522690 config.go:182] Loaded profile config "newest-cni-443576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:31:44.978543  522690 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:31:45.013367  522690 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:31:45.013641  522690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:31:45.132742  522690 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:31:45.119099719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:31:45.132859  522690 docker.go:319] overlay module found
	I1227 10:31:45.151211  522690 out.go:179] * Using the docker driver based on user configuration
	I1227 10:31:45.161523  522690 start.go:309] selected driver: docker
	I1227 10:31:45.161550  522690 start.go:928] validating driver "docker" against <nil>
	I1227 10:31:45.161566  522690 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:31:45.162768  522690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:31:45.247576  522690 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:31:45.23416864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:31:45.247752  522690 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:31:45.248158  522690 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:31:45.251458  522690 out.go:179] * Using Docker driver with root privileges
	I1227 10:31:45.254941  522690 cni.go:84] Creating CNI manager for ""
	I1227 10:31:45.255099  522690 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 10:31:45.255116  522690 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:31:45.255223  522690 start.go:353] cluster config:
	{Name:auto-785247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-785247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I1227 10:31:45.258700  522690 out.go:179] * Starting "auto-785247" primary control-plane node in "auto-785247" cluster
	I1227 10:31:45.261973  522690 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 10:31:45.265527  522690 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:31:45.271948  522690 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I1227 10:31:45.272027  522690 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:31:45.272076  522690 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I1227 10:31:45.272089  522690 cache.go:65] Caching tarball of preloaded images
	I1227 10:31:45.272227  522690 preload.go:251] Found /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1227 10:31:45.272245  522690 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I1227 10:31:45.272406  522690 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/config.json ...
	I1227 10:31:45.272450  522690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/config.json: {Name:mkeb4678b36278f80604530ac1a025792d7132dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:31:45.315552  522690 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:31:45.315581  522690 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:31:45.315640  522690 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:31:45.315681  522690 start.go:360] acquireMachinesLock for auto-785247: {Name:mk39125bf7403ab32739efdb166a1a1e645c8b2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:31:45.315886  522690 start.go:364] duration metric: took 168.231µs to acquireMachinesLock for "auto-785247"
	I1227 10:31:45.316207  522690 start.go:93] Provisioning new machine with config: &{Name:auto-785247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-785247 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1227 10:31:45.316354  522690 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.105135267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.113378107Z" level=info msg="Running pod sandbox: kube-system/kindnet-5d2fh/POD" id=525f1420-f299-40ab-949f-84c8ec6d526c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.113491709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.137456186Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=525f1420-f299-40ab-949f-84c8ec6d526c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.138641437Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c4021eed-c4f8-4958-a95b-011909b19209 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.14567373Z" level=info msg="Ran pod sandbox 473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17 with infra container: kube-system/kindnet-5d2fh/POD" id=525f1420-f299-40ab-949f-84c8ec6d526c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.147020977Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=a0cc1065-8d1c-4cda-8242-4fc8e91ea4c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.148265264Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=b2863e8b-f845-440e-a4b7-7237f2d88fc4 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.14966262Z" level=info msg="Creating container: kube-system/kindnet-5d2fh/kindnet-cni" id=a5504ecb-b3e8-4090-b1ed-09fa5b1dbb75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.14989166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.173111359Z" level=info msg="Ran pod sandbox 4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412 with infra container: kube-system/kube-proxy-xj5vc/POD" id=c4021eed-c4f8-4958-a95b-011909b19209 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.174573963Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=1844e5e1-d48b-4bbb-916e-540272214a7d name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.175874046Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=615f2298-333d-44fd-9f1c-86f7c1c4cc33 name=/runtime.v1.ImageService/ImageStatus
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.177153936Z" level=info msg="Creating container: kube-system/kube-proxy-xj5vc/kube-proxy" id=1d45656c-db35-48ee-b492-cde372c6ef0f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.177417684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.19163748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.202921246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.205750231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.206051091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.27454037Z" level=info msg="Created container dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9: kube-system/kube-proxy-xj5vc/kube-proxy" id=1d45656c-db35-48ee-b492-cde372c6ef0f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.275535531Z" level=info msg="Starting container: dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9" id=d5eae6c8-730b-4622-b159-e790486d113c name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.276089679Z" level=info msg="Created container 521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d: kube-system/kindnet-5d2fh/kindnet-cni" id=a5504ecb-b3e8-4090-b1ed-09fa5b1dbb75 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.277488872Z" level=info msg="Starting container: 521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d" id=ad5a8be3-38f5-4943-a58e-3cfd090fda77 name=/runtime.v1.RuntimeService/StartContainer
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.284233433Z" level=info msg="Started container" PID=1076 containerID=dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9 description=kube-system/kube-proxy-xj5vc/kube-proxy id=d5eae6c8-730b-4622-b159-e790486d113c name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412
	Dec 27 10:31:41 newest-cni-443576 crio[615]: time="2025-12-27T10:31:41.284697463Z" level=info msg="Started container" PID=1077 containerID=521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d description=kube-system/kindnet-5d2fh/kindnet-cni id=ad5a8be3-38f5-4943-a58e-3cfd090fda77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	dfccd09882b3f       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   8 seconds ago       Running             kube-proxy                1                   4ec1adf8fc459       kube-proxy-xj5vc                            kube-system
	521b827abe2b3       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   8 seconds ago       Running             kindnet-cni               1                   473a5664dcd9d       kindnet-5d2fh                               kube-system
	9d86a268fe4d1       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   15 seconds ago      Running             kube-controller-manager   1                   515bf9b1b0810       kube-controller-manager-newest-cni-443576   kube-system
	17ebce0a96d90       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   15 seconds ago      Running             kube-scheduler            1                   8fb96fdfa74b4       kube-scheduler-newest-cni-443576            kube-system
	ff3729c296e1a       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   15 seconds ago      Running             kube-apiserver            1                   b92225d70bd05       kube-apiserver-newest-cni-443576            kube-system
	9b49c8e0c5ad1       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   15 seconds ago      Running             etcd                      1                   39579ae0a89dd       etcd-newest-cni-443576                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-443576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-443576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8
	                    minikube.k8s.io/name=newest-cni-443576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_27T10_31_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 27 Dec 2025 10:31:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-443576
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 27 Dec 2025 10:31:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 27 Dec 2025 10:31:40 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 27 Dec 2025 10:31:40 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 27 Dec 2025 10:31:40 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 27 Dec 2025 10:31:40 +0000   Sat, 27 Dec 2025 10:31:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-443576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56495bcb733b49eb3642bf15694bbc8c
	  System UUID:                469dccd3-aab7-4ad3-8e7d-e13b529d966f
	  Boot ID:                    60ce0489-ffa1-45f8-9eea-93a37de509ef
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-443576                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-5d2fh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-443576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-newest-cni-443576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-xj5vc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-443576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  32s   node-controller  Node newest-cni-443576 event: Registered Node newest-cni-443576 in Controller
	  Normal  RegisteredNode  6s    node-controller  Node newest-cni-443576 event: Registered Node newest-cni-443576 in Controller
	
	
	==> dmesg <==
	[ +23.438672] overlayfs: idmapped layers are currently not supported
	[Dec27 10:02] overlayfs: idmapped layers are currently not supported
	[ +41.463005] overlayfs: idmapped layers are currently not supported
	[Dec27 10:03] overlayfs: idmapped layers are currently not supported
	[ +36.922231] overlayfs: idmapped layers are currently not supported
	[Dec27 10:04] overlayfs: idmapped layers are currently not supported
	[Dec27 10:06] overlayfs: idmapped layers are currently not supported
	[Dec27 10:08] overlayfs: idmapped layers are currently not supported
	[ +35.057670] overlayfs: idmapped layers are currently not supported
	[  +1.688587] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 10:13] overlayfs: idmapped layers are currently not supported
	[Dec27 10:14] overlayfs: idmapped layers are currently not supported
	[Dec27 10:15] overlayfs: idmapped layers are currently not supported
	[Dec27 10:16] overlayfs: idmapped layers are currently not supported
	[Dec27 10:23] overlayfs: idmapped layers are currently not supported
	[ +32.540986] overlayfs: idmapped layers are currently not supported
	[Dec27 10:24] overlayfs: idmapped layers are currently not supported
	[Dec27 10:26] overlayfs: idmapped layers are currently not supported
	[Dec27 10:27] overlayfs: idmapped layers are currently not supported
	[Dec27 10:28] overlayfs: idmapped layers are currently not supported
	[Dec27 10:29] overlayfs: idmapped layers are currently not supported
	[ +34.978626] overlayfs: idmapped layers are currently not supported
	[Dec27 10:30] overlayfs: idmapped layers are currently not supported
	[Dec27 10:31] overlayfs: idmapped layers are currently not supported
	[ +26.977751] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9b49c8e0c5ad139af1dca95dd998c781b665b27fd74dac2b63d77d87a262b4a8] <==
	{"level":"info","ts":"2025-12-27T10:31:35.572554Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-27T10:31:35.572829Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-27T10:31:35.572947Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-27T10:31:35.989525Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:35.989595Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:35.989640Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-27T10:31:35.989652Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:31:35.989668Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-27T10:31:35.993374Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:31:35.993426Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-27T10:31:35.993446Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-27T10:31:35.993456Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-27T10:31:36.005900Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-443576 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-27T10:31:36.006113Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:31:36.013404Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:31:36.030988Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-27T10:31:36.032202Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-27T10:31:36.032225Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-27T10:31:36.112665Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-27T10:31:36.144875Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-27T10:31:36.181083Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	2025/12/27 10:31:44 WARNING: [core] [Server #3]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2025-12-27T10:31:44.178257Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.688765ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b\" limit:1 ","response":"range_response_count:1 size:3003"}
	{"level":"info","ts":"2025-12-27T10:31:44.178365Z","caller":"traceutil/trace.go:172","msg":"trace[1885251736] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b; range_end:; response_count:1; response_revision:545; }","duration":"135.80509ms","start":"2025-12-27T10:31:44.042545Z","end":"2025-12-27T10:31:44.178350Z","steps":["trace[1885251736] 'agreement among raft nodes before linearized reading'  (duration: 110.122047ms)","trace[1885251736] 'range keys from in-memory index tree'  (duration: 25.53169ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-27T10:31:44.205778Z","caller":"traceutil/trace.go:172","msg":"trace[1239789975] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"110.258172ms","start":"2025-12-27T10:31:44.095506Z","end":"2025-12-27T10:31:44.205764Z","steps":["trace[1239789975] 'process raft request'  (duration: 110.14503ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:31:49 up  2:14,  0 user,  load average: 5.27, 3.34, 2.47
	Linux newest-cni-443576 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [521b827abe2b392db881f7bc48c103320048238c244ea38dca4034c4bf76584d] <==
	I1227 10:31:41.417590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1227 10:31:41.417822       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1227 10:31:41.417937       1 main.go:148] setting mtu 1500 for CNI 
	I1227 10:31:41.417949       1 main.go:178] kindnetd IP family: "ipv4"
	I1227 10:31:41.417959       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-27T10:31:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1227 10:31:41.640622       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1227 10:31:41.640703       1 controller.go:381] "Waiting for informer caches to sync"
	I1227 10:31:41.640738       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1227 10:31:41.641566       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [ff3729c296e1ad6bd2cf7a96c1c320f1d95f0859554b86594219e09719044953] <==
	I1227 10:31:39.980053       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1227 10:31:39.989567       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:39.989591       1 policy_source.go:248] refreshing policies
	E1227 10:31:39.989974       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1227 10:31:40.001040       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1227 10:31:40.022794       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1227 10:31:40.024109       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1227 10:31:40.583779       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1227 10:31:40.858447       1 controller.go:667] quota admission added evaluator for: namespaces
	I1227 10:31:40.946999       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1227 10:31:41.115655       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1227 10:31:41.287492       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1227 10:31:41.330622       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1227 10:31:41.532470       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.19.21"}
	I1227 10:31:41.585584       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.145.39"}
	I1227 10:31:43.519209       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1227 10:31:43.617640       1 controller.go:667] quota admission added evaluator for: endpoints
	I1227 10:31:43.702851       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1227 10:31:43.721163       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	{"level":"warn","ts":"2025-12-27T10:31:44.128428Z","logger":"etcd-client","caller":"v3@v3.6.5/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001890b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1227 10:31:44.129055       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1227 10:31:44.129083       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1227 10:31:44.129107       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.434µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1227 10:31:44.130282       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1227 10:31:44.130408       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.480261ms" method="PATCH" path="/api/v1/namespaces/kube-system/pods/kindnet-5d2fh/status" result=null
	
	
	==> kube-controller-manager [9d86a268fe4d18a181b8ab2bd18ad8d28c04e4290fc23cd5cd9adaf6b181c326] <==
	I1227 10:31:43.016721       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016729       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016744       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016758       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016764       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016771       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016777       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016782       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.018075       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.023545       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025865       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025882       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025889       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025902       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025909       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.054958       1 range_allocator.go:177] "Sending events to api server"
	I1227 10:31:43.055017       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1227 10:31:43.055048       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:31:43.055078       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.025916       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.016751       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.100262       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:43.100369       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1227 10:31:43.100399       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1227 10:31:43.101124       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [dfccd09882b3fa648e479d16b7b56e145040cfe442a06ea057171381ff817ef9] <==
	I1227 10:31:41.440821       1 server_linux.go:53] "Using iptables proxy"
	I1227 10:31:41.751891       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:31:41.953004       1 shared_informer.go:377] "Caches are synced"
	I1227 10:31:41.953059       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1227 10:31:41.953150       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1227 10:31:42.076259       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1227 10:31:42.085786       1 server_linux.go:136] "Using iptables Proxier"
	I1227 10:31:42.096218       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1227 10:31:42.096701       1 server.go:529] "Version info" version="v1.35.0"
	I1227 10:31:42.096805       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:31:42.100740       1 config.go:200] "Starting service config controller"
	I1227 10:31:42.100855       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1227 10:31:42.100916       1 config.go:106] "Starting endpoint slice config controller"
	I1227 10:31:42.100971       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1227 10:31:42.101036       1 config.go:403] "Starting serviceCIDR config controller"
	I1227 10:31:42.101074       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1227 10:31:42.101889       1 config.go:309] "Starting node config controller"
	I1227 10:31:42.132976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1227 10:31:42.220796       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1227 10:31:42.304066       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1227 10:31:42.306274       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1227 10:31:42.306293       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [17ebce0a96d9023699d1bbeb012c333be3ff9509ae89fe7ee066a6a756ba87fa] <==
	I1227 10:31:38.657954       1 serving.go:386] Generated self-signed cert in-memory
	W1227 10:31:39.889058       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1227 10:31:39.889083       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1227 10:31:39.889102       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1227 10:31:39.889111       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1227 10:31:40.073183       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I1227 10:31:40.073209       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1227 10:31:40.084650       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1227 10:31:40.096232       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1227 10:31:40.096259       1 shared_informer.go:370] "Waiting for caches to sync"
	I1227 10:31:40.096298       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1227 10:31:40.297140       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.232563     737 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-443576"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.232591     737 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.233589     737 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.259750     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-443576\" already exists" pod="kube-system/kube-controller-manager-newest-cni-443576"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.259794     737 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-443576"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.277749     737 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-443576\" already exists" pod="kube-system/kube-scheduler-newest-cni-443576"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.777138     737 apiserver.go:52] "Watching apiserver"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.795555     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-443576" containerName="kube-apiserver"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.802927     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-443576" containerName="etcd"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.803267     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-443576" containerName="kube-scheduler"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: E1227 10:31:40.803522     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-443576" containerName="kube-controller-manager"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.888514     737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927460     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50656616-4132-47e7-a39a-86fcb9ca8a73-xtables-lock\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927530     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda1f65f-8d91-4868-a723-87bf2ec5bef8-lib-modules\") pod \"kube-proxy-xj5vc\" (UID: \"dda1f65f-8d91-4868-a723-87bf2ec5bef8\") " pod="kube-system/kube-proxy-xj5vc"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927577     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/50656616-4132-47e7-a39a-86fcb9ca8a73-cni-cfg\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927601     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda1f65f-8d91-4868-a723-87bf2ec5bef8-xtables-lock\") pod \"kube-proxy-xj5vc\" (UID: \"dda1f65f-8d91-4868-a723-87bf2ec5bef8\") " pod="kube-system/kube-proxy-xj5vc"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.927635     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50656616-4132-47e7-a39a-86fcb9ca8a73-lib-modules\") pod \"kindnet-5d2fh\" (UID: \"50656616-4132-47e7-a39a-86fcb9ca8a73\") " pod="kube-system/kindnet-5d2fh"
	Dec 27 10:31:40 newest-cni-443576 kubelet[737]: I1227 10:31:40.989932     737 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 27 10:31:41 newest-cni-443576 kubelet[737]: W1227 10:31:41.143541     737 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/crio-473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17 WatchSource:0}: Error finding container 473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17: Status 404 returned error can't find the container with id 473a5664dcd9d7d2fd73b479171af09425e93b8d3a8ac4178335852871202e17
	Dec 27 10:31:41 newest-cni-443576 kubelet[737]: W1227 10:31:41.170577     737 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/1f8734c86b7f2a072fb746aaa38394b1df075ec06be28d71558eef3d12478979/crio-4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412 WatchSource:0}: Error finding container 4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412: Status 404 returned error can't find the container with id 4ec1adf8fc459865d4e5ca57ba360084d9bfa02d98119e789beb116d7eb1e412
	Dec 27 10:31:41 newest-cni-443576 kubelet[737]: E1227 10:31:41.361711     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-443576" containerName="etcd"
	Dec 27 10:31:41 newest-cni-443576 kubelet[737]: E1227 10:31:41.489706     737 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-443576" containerName="kube-scheduler"
	Dec 27 10:31:44 newest-cni-443576 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 27 10:31:44 newest-cni-443576 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 27 10:31:44 newest-cni-443576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-443576 -n newest-cni-443576
E1227 10:31:50.969520  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-443576 -n newest-cni-443576: exit status 2 (449.499868ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-443576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-w5pw2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-j49zh kubernetes-dashboard-b84665fb8-fp67p
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-443576 describe pod coredns-7d764666f9-w5pw2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-j49zh kubernetes-dashboard-b84665fb8-fp67p
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-443576 describe pod coredns-7d764666f9-w5pw2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-j49zh kubernetes-dashboard-b84665fb8-fp67p: exit status 1 (163.330926ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-w5pw2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-j49zh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fp67p" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-443576 describe pod coredns-7d764666f9-w5pw2 storage-provisioner dashboard-metrics-scraper-867fb5f87b-j49zh kubernetes-dashboard-b84665fb8-fp67p: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (8.03s)
E1227 10:37:15.339218  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:16.093544  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:39.324180  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:39.753918  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:39.759266  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:39.769640  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:39.790000  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:39.830276  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:39.910551  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:40.070967  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:40.391397  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:41.032476  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:42.312762  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:44.873735  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:45.886475  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:45.891809  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:45.902103  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:45.922502  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:45.962867  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:46.043260  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:46.203680  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:46.523876  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:47.164864  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:48.445166  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:49.994442  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:51.005374  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:37:56.126555  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:38:00.235690  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/auto-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (270/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.57
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.35.0/json-events 3.53
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.18
18 TestDownloadOnly/v1.35.0/DeleteAll 0.27
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 144.55
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 9.82
48 TestAddons/StoppedEnableDisable 12.43
49 TestCertOptions 29.46
50 TestCertExpiration 237.4
58 TestErrorSpam/setup 26.63
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.13
61 TestErrorSpam/pause 6.57
62 TestErrorSpam/unpause 4.55
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 46.68
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.15
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
75 TestFunctional/serial/CacheCmd/cache/add_local 1.24
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 31.01
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.49
87 TestFunctional/serial/InvalidService 4.74
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 12.92
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.14
97 TestFunctional/parallel/ServiceCmdConnect 8.58
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 20.95
101 TestFunctional/parallel/SSHCmd 0.75
102 TestFunctional/parallel/CpCmd 2.42
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.33
109 TestFunctional/parallel/NodeLabels 0.13
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
113 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 8.2
130 TestFunctional/parallel/ServiceCmd/List 0.52
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.37
135 TestFunctional/parallel/MountCmd/specific-port 2.22
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.61
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 0.63
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.01
144 TestFunctional/parallel/ImageCommands/Setup 0.67
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.1
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.03
162 TestMultiControlPlane/serial/StartCluster 161.42
163 TestMultiControlPlane/serial/DeployApp 37.37
164 TestMultiControlPlane/serial/PingHostFromPods 1.49
165 TestMultiControlPlane/serial/AddWorkerNode 31.71
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 20.11
169 TestMultiControlPlane/serial/StopSecondaryNode 13.26
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
171 TestMultiControlPlane/serial/RestartSecondaryNode 21.61
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.23
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 124.38
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.14
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
176 TestMultiControlPlane/serial/StopCluster 36.15
185 TestJSONOutput/start/Command 47.79
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.85
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 35.93
211 TestKicCustomNetwork/use_default_bridge_network 30.25
212 TestKicExistingNetwork 31.89
213 TestKicCustomSubnet 30.51
214 TestKicStaticIP 32.1
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 59.78
219 TestMountStart/serial/StartWithMountFirst 9.04
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 6.3
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.02
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 74.94
231 TestMultiNode/serial/DeployApp2Nodes 5.36
232 TestMultiNode/serial/PingHostFrom2Pods 0.94
233 TestMultiNode/serial/AddNode 28.59
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.74
236 TestMultiNode/serial/CopyFile 10.77
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.54
239 TestMultiNode/serial/RestartKeepsNodes 71.02
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 24.14
242 TestMultiNode/serial/RestartMultiNode 52.75
243 TestMultiNode/serial/ValidateNameConflict 30.54
250 TestScheduledStopUnix 102.07
253 TestInsufficientStorage 12.62
254 TestRunningBinaryUpgrade 302.92
256 TestKubernetesUpgrade 107.49
257 TestMissingContainerUpgrade 110.48
259 TestPause/serial/Start 55.19
260 TestPause/serial/SecondStartNoReconfiguration 27.66
262 TestStoppedBinaryUpgrade/Setup 0.8
263 TestStoppedBinaryUpgrade/Upgrade 319.77
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.55
272 TestPreload/Start-NoPreload-PullImage 66.55
273 TestPreload/Restart-With-Preload-Check-User-Image 56.12
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
277 TestNoKubernetes/serial/StartWithK8s 27.33
278 TestNoKubernetes/serial/StartWithStopK8s 10.1
279 TestNoKubernetes/serial/Start 7.76
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
282 TestNoKubernetes/serial/ProfileList 1.05
283 TestNoKubernetes/serial/Stop 1.31
284 TestNoKubernetes/serial/StartNoArgs 6.9
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
293 TestNetworkPlugins/group/false 3.69
298 TestStartStop/group/old-k8s-version/serial/FirstStart 58.1
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.42
301 TestStartStop/group/old-k8s-version/serial/Stop 12.06
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
303 TestStartStop/group/old-k8s-version/serial/SecondStart 52.62
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
309 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.79
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.33
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.27
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.57
315 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
317 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/embed-certs/serial/FirstStart 48.19
322 TestStartStop/group/no-preload/serial/FirstStart 56.89
323 TestStartStop/group/embed-certs/serial/DeployApp 8.42
325 TestStartStop/group/embed-certs/serial/Stop 12.23
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
327 TestStartStop/group/embed-certs/serial/SecondStart 49.36
328 TestStartStop/group/no-preload/serial/DeployApp 10.36
330 TestStartStop/group/no-preload/serial/Stop 12.03
331 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/no-preload/serial/SecondStart 50.19
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
338 TestStartStop/group/newest-cni/serial/FirstStart 33.06
339 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
340 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.26
341 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
345 TestStartStop/group/newest-cni/serial/Stop 1.48
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
347 TestStartStop/group/newest-cni/serial/SecondStart 17.13
348 TestPreload/PreloadSrc/gcs 4.96
349 TestPreload/PreloadSrc/github 5.68
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.45
354 TestPreload/PreloadSrc/gcs-cached 1.05
355 TestNetworkPlugins/group/auto/Start 54.26
356 TestNetworkPlugins/group/kindnet/Start 51.84
357 TestNetworkPlugins/group/auto/KubeletFlags 0.3
358 TestNetworkPlugins/group/auto/NetCatPod 11.34
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/auto/DNS 0.16
361 TestNetworkPlugins/group/auto/Localhost 0.15
362 TestNetworkPlugins/group/auto/HairPin 0.15
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
364 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
365 TestNetworkPlugins/group/kindnet/DNS 0.27
366 TestNetworkPlugins/group/kindnet/Localhost 0.21
367 TestNetworkPlugins/group/kindnet/HairPin 0.19
368 TestNetworkPlugins/group/calico/Start 75.96
369 TestNetworkPlugins/group/custom-flannel/Start 64.18
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
373 TestNetworkPlugins/group/calico/KubeletFlags 0.42
374 TestNetworkPlugins/group/calico/NetCatPod 10.36
375 TestNetworkPlugins/group/custom-flannel/DNS 0.17
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
378 TestNetworkPlugins/group/calico/DNS 0.18
379 TestNetworkPlugins/group/calico/Localhost 0.15
380 TestNetworkPlugins/group/calico/HairPin 0.14
381 TestNetworkPlugins/group/enable-default-cni/Start 71.12
382 TestNetworkPlugins/group/flannel/Start 56.56
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
385 TestNetworkPlugins/group/flannel/NetCatPod 11.28
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.39
388 TestNetworkPlugins/group/flannel/DNS 0.19
389 TestNetworkPlugins/group/flannel/Localhost 0.14
390 TestNetworkPlugins/group/flannel/HairPin 0.14
391 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
392 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
393 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
394 TestNetworkPlugins/group/bridge/Start 65.71
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
396 TestNetworkPlugins/group/bridge/NetCatPod 10.27
397 TestNetworkPlugins/group/bridge/DNS 0.15
398 TestNetworkPlugins/group/bridge/Localhost 0.13
399 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (6.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-259204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-259204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.572630859s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.57s)
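For reference, the same download-only flow can be driven by hand with the flags shown above (a minimal sketch assuming the locally built binary at out/minikube-linux-arm64; the profile name is a placeholder):

  # Fetch the v1.28.0 preload and kic base image without creating a cluster
  out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo \
    --force --alsologtostderr --kubernetes-version=v1.28.0 \
    --container-runtime=crio --driver=docker
  # The cached tarball lands under the minikube home (here $MINIKUBE_HOME),
  # which is what the preload-exists check that follows looks for
  ls "$MINIKUBE_HOME/cache/preloaded-tarball/"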

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 09:29:43.238587  299811 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1227 09:29:43.238695  299811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-259204
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-259204: exit status 85 (88.869182ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-259204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-259204 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:29:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:29:36.713857  299817 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:29:36.713964  299817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:36.713974  299817 out.go:374] Setting ErrFile to fd 2...
	I1227 09:29:36.713979  299817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:36.714335  299817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	W1227 09:29:36.714500  299817 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22343-297941/.minikube/config/config.json: open /home/jenkins/minikube-integration/22343-297941/.minikube/config/config.json: no such file or directory
	I1227 09:29:36.714916  299817 out.go:368] Setting JSON to true
	I1227 09:29:36.715754  299817 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4330,"bootTime":1766823447,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:29:36.715850  299817 start.go:143] virtualization:  
	I1227 09:29:36.721886  299817 out.go:99] [download-only-259204] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1227 09:29:36.722088  299817 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 09:29:36.722207  299817 notify.go:221] Checking for updates...
	I1227 09:29:36.725954  299817 out.go:171] MINIKUBE_LOCATION=22343
	I1227 09:29:36.729065  299817 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:29:36.732153  299817 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:29:36.735159  299817 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:29:36.738151  299817 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 09:29:36.743943  299817 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:29:36.744237  299817 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:29:36.775107  299817 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:29:36.775224  299817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:29:36.832031  299817 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 09:29:36.822724519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:29:36.832132  299817 docker.go:319] overlay module found
	I1227 09:29:36.835182  299817 out.go:99] Using the docker driver based on user configuration
	I1227 09:29:36.835225  299817 start.go:309] selected driver: docker
	I1227 09:29:36.835233  299817 start.go:928] validating driver "docker" against <nil>
	I1227 09:29:36.835336  299817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:29:36.897999  299817 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 09:29:36.888701366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:29:36.898140  299817 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:29:36.898418  299817 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 09:29:36.898562  299817 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:29:36.901768  299817 out.go:171] Using Docker driver with root privileges
	I1227 09:29:36.904736  299817 cni.go:84] Creating CNI manager for ""
	I1227 09:29:36.904804  299817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1227 09:29:36.904818  299817 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:29:36.904896  299817 start.go:353] cluster config:
	{Name:download-only-259204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-259204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:29:36.907938  299817 out.go:99] Starting "download-only-259204" primary control-plane node in "download-only-259204" cluster
	I1227 09:29:36.907958  299817 cache.go:134] Beginning downloading kic base image for docker with crio
	I1227 09:29:36.910889  299817 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:29:36.910935  299817 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:29:36.911093  299817 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:29:36.926520  299817 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:29:36.926688  299817 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 09:29:36.926777  299817 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:29:36.960038  299817 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:29:36.960066  299817 cache.go:65] Caching tarball of preloaded images
	I1227 09:29:36.960256  299817 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:29:36.963592  299817 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 09:29:36.963633  299817 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:29:36.963641  299817 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1227 09:29:37.047619  299817 preload.go:313] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1227 09:29:37.047795  299817 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1227 09:29:39.830340  299817 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1227 09:29:39.830747  299817 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/download-only-259204/config.json ...
	I1227 09:29:39.830788  299817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/download-only-259204/config.json: {Name:mk417f3aa65a462ba787f360878739467ebba503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:29:39.830981  299817 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1227 09:29:39.831199  299817 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22343-297941/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-259204 host does not exist
	  To start a cluster, run: "minikube start -p download-only-259204"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-259204
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (3.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-787419 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-787419 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.526222523s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.53s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 09:29:47.209933  299811 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I1227 09:29:47.209978  299811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-787419
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-787419: exit status 85 (177.027884ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-259204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-259204 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ delete  │ -p download-only-259204                                                                                                                                                   │ download-only-259204 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │ 27 Dec 25 09:29 UTC │
	│ start   │ -o=json --download-only -p download-only-787419 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-787419 │ jenkins │ v1.37.0 │ 27 Dec 25 09:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:29:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:29:43.726847  300017 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:29:43.726976  300017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:43.726991  300017 out.go:374] Setting ErrFile to fd 2...
	I1227 09:29:43.726998  300017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:29:43.727401  300017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:29:43.727906  300017 out.go:368] Setting JSON to true
	I1227 09:29:43.728921  300017 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4337,"bootTime":1766823447,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:29:43.729010  300017 start.go:143] virtualization:  
	I1227 09:29:43.732475  300017 out.go:99] [download-only-787419] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:29:43.732661  300017 notify.go:221] Checking for updates...
	I1227 09:29:43.735733  300017 out.go:171] MINIKUBE_LOCATION=22343
	I1227 09:29:43.738869  300017 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:29:43.741821  300017 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:29:43.744750  300017 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:29:43.747850  300017 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 09:29:43.753725  300017 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:29:43.753999  300017 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:29:43.783945  300017 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:29:43.784075  300017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:29:43.848761  300017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 09:29:43.839221811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:29:43.848888  300017 docker.go:319] overlay module found
	I1227 09:29:43.851880  300017 out.go:99] Using the docker driver based on user configuration
	I1227 09:29:43.851925  300017 start.go:309] selected driver: docker
	I1227 09:29:43.851931  300017 start.go:928] validating driver "docker" against <nil>
	I1227 09:29:43.852074  300017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:29:43.908625  300017 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 09:29:43.900043839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:29:43.908772  300017 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:29:43.909033  300017 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 09:29:43.909181  300017 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:29:43.912299  300017 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-787419 host does not exist
	  To start a cluster, run: "minikube start -p download-only-787419"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.18s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.27s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-787419
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1227 09:29:49.102448  299811 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-963946 --alsologtostderr --binary-mirror http://127.0.0.1:44187 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-963946" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-963946
--- PASS: TestBinaryMirror (0.61s)
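The --binary-mirror flag exercised here redirects the kubectl/kubelet/kubeadm binary downloads to an alternative host; in the test it points at a throwaway local HTTP server. A minimal sketch of the same invocation (profile name and mirror URL are placeholders):

  out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
    --alsologtostderr --binary-mirror http://127.0.0.1:44187 \
    --driver=docker --container-runtime=crio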

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-716851
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-716851: exit status 85 (75.795702ms)

                                                
                                                
-- stdout --
	* Profile "addons-716851" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-716851"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-716851
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-716851: exit status 85 (72.763186ms)

                                                
                                                
-- stdout --
	* Profile "addons-716851" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-716851"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (144.55s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-716851 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-716851 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m24.548587113s)
--- PASS: TestAddons/Setup (144.55s)
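Setup enables the full addon matrix in a single start. Outside of CI the same pattern works with any subset of addons (sketch; only a few addons shown, profile name is a placeholder):

  out/minikube-linux-arm64 start -p addons-demo --wait=true --memory=4096 \
    --driver=docker --container-runtime=crio \
    --addons=registry --addons=metrics-server --addons=ingress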

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-716851 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-716851 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.82s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-716851 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-716851 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0d69ad91-de78-4cf3-8220-55cead6ce80e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0d69ad91-de78-4cf3-8220-55cead6ce80e] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003612906s
addons_test.go:696: (dbg) Run:  kubectl --context addons-716851 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-716851 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-716851 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-716851 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.82s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.43s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-716851
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-716851: (12.126349667s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-716851
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-716851
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-716851
--- PASS: TestAddons/StoppedEnableDisable (12.43s)
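The sequence above checks that addons can still be toggled while the cluster is stopped. Reproduced by hand it is simply (sketch; profile name is a placeholder):

  out/minikube-linux-arm64 stop -p addons-demo
  out/minikube-linux-arm64 addons enable dashboard -p addons-demo
  out/minikube-linux-arm64 addons disable dashboard -p addons-demo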

                                                
                                    
TestCertOptions (29.46s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-810217 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-810217 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.68490496s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-810217 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-810217 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-810217 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-810217" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-810217
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-810217: (2.045193439s)
--- PASS: TestCertOptions (29.46s)
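To verify the extra SANs and API server port by hand, the same openssl check the test runs works against any profile (sketch; profile name is a placeholder):

  out/minikube-linux-arm64 -p cert-options-demo ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # the Subject Alternative Name block should list the extra --apiserver-ips
  # and --apiserver-names values, and "kubectl config view" should show the
  # non-default API server port (8555 in this run)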

                                                
                                    
TestCertExpiration (237.4s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-528820 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1227 10:16:42.757378  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-528820 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.741918249s)
E1227 10:17:15.338535  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-528820 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-528820 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (27.139795924s)
helpers_test.go:176: Cleaning up "cert-expiration-528820" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-528820
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-528820: (2.51071635s)
--- PASS: TestCertExpiration (237.40s)

                                                
                                    
TestErrorSpam/setup (26.63s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-972295 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-972295 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-972295 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-972295 --driver=docker  --container-runtime=crio: (26.627970819s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (26.63s)

                                                
                                    
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.13s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 status
--- PASS: TestErrorSpam/status (1.13s)

                                                
                                    
TestErrorSpam/pause (6.57s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 pause: exit status 80 (1.73398797s)

                                                
                                                
-- stdout --
	* Pausing node nospam-972295 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:34:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 pause: exit status 80 (2.368431275s)

                                                
                                                
-- stdout --
	* Pausing node nospam-972295 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:34:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 pause: exit status 80 (2.469514761s)

                                                
                                                
-- stdout --
	* Pausing node nospam-972295 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:34:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.57s)

                                                
                                    
TestErrorSpam/unpause (4.55s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 unpause: exit status 80 (1.347208215s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-972295 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:34:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 unpause: exit status 80 (1.513634318s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-972295 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:34:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 unpause: exit status 80 (1.685994925s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-972295 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-27T09:34:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.55s)
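
Both the pause and unpause attempts above exit 80 for the same underlying reason: the "sudo runc list -f json" step inside the node fails with "open /run/runc: no such file or directory", so minikube cannot list the containers it would (un)pause. A quick manual spot-check, assuming the nospam-972295 profile is still running, might look like:

  # Does the runc state directory exist inside the node?
  out/minikube-linux-arm64 -p nospam-972295 ssh "ls -ld /run/runc"

  # Re-run the exact listing step that pause/unpause depends on
  out/minikube-linux-arm64 -p nospam-972295 ssh "sudo runc list -f json"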

                                                
                                    
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 stop: (1.3160938s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-972295 --log_dir /tmp/nospam-972295 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22343-297941/.minikube/files/etc/test/nested/copy/299811/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (46.68s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-234952 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-234952 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (46.684360637s)
--- PASS: TestFunctional/serial/StartWithProxy (46.68s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1227 09:35:25.111156  299811 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-234952 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-234952 --alsologtostderr -v=8: (29.152480045s)
functional_test.go:678: soft start took 29.153061175s for "functional-234952" cluster.
I1227 09:35:54.263922  299811 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (29.15s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-234952 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-234952 cache add registry.k8s.io/pause:3.1: (1.173666954s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-234952 cache add registry.k8s.io/pause:3.3: (1.151777846s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-234952 cache add registry.k8s.io/pause:latest: (1.20419261s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-234952 /tmp/TestFunctionalserialCacheCmdcacheadd_local1028931247/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 cache add minikube-local-cache-test:functional-234952
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 cache delete minikube-local-cache-test:functional-234952
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-234952
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (551.623069ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)
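
The reload cycle exercised here can be replayed by hand with the same commands; a short sketch using this run's profile name:

  # Remove the cached image from the node, confirm it is gone, then restore it from minikube's on-disk cache
  out/minikube-linux-arm64 -p functional-234952 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-234952 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail while the image is absent
  out/minikube-linux-arm64 -p functional-234952 cache reload
  out/minikube-linux-arm64 -p functional-234952 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds once the cache is reloaded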

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 kubectl -- --context functional-234952 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-234952 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-234952 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-234952 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.006201231s)
functional_test.go:776: restart took 31.006291439s for "functional-234952" cluster.
I1227 09:36:33.094111  299811 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (31.01s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-234952 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-234952 logs: (1.441630194s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 logs --file /tmp/TestFunctionalserialLogsFileCmd2380872224/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-234952 logs --file /tmp/TestFunctionalserialLogsFileCmd2380872224/001/logs.txt: (1.486750774s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
TestFunctional/serial/InvalidService (4.74s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-234952 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-234952
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-234952: exit status 115 (392.106438ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32595 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-234952 delete -f testdata/invalidsvc.yaml
functional_test.go:2337: (dbg) Done: kubectl --context functional-234952 delete -f testdata/invalidsvc.yaml: (1.093236361s)
--- PASS: TestFunctional/serial/InvalidService (4.74s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 config get cpus: exit status 14 (101.971009ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 config get cpus: exit status 14 (74.807014ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-234952 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-234952 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 323775: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.92s)

                                                
                                    
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-234952 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-234952 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.236975ms)

                                                
                                                
-- stdout --
	* [functional-234952] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:37:10.905297  323438 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:10.905463  323438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:10.905475  323438 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:10.905481  323438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:10.905769  323438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:37:10.906142  323438 out.go:368] Setting JSON to false
	I1227 09:37:10.907041  323438 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4784,"bootTime":1766823447,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:37:10.907121  323438 start.go:143] virtualization:  
	I1227 09:37:10.910294  323438 out.go:179] * [functional-234952] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:37:10.913959  323438 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:37:10.914093  323438 notify.go:221] Checking for updates...
	I1227 09:37:10.922043  323438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:37:10.924980  323438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:37:10.929164  323438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:37:10.932216  323438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:37:10.935933  323438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:37:10.939478  323438 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:10.940090  323438 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:37:10.969765  323438 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:37:10.969889  323438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:11.034178  323438 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:37:11.024687395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:37:11.034285  323438 docker.go:319] overlay module found
	I1227 09:37:11.037776  323438 out.go:179] * Using the docker driver based on existing profile
	I1227 09:37:11.040668  323438 start.go:309] selected driver: docker
	I1227 09:37:11.040693  323438 start.go:928] validating driver "docker" against &{Name:functional-234952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-234952 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:11.040799  323438 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:37:11.044219  323438 out.go:203] 
	W1227 09:37:11.047235  323438 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 09:37:11.049982  323438 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-234952 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-234952 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-234952 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (205.749521ms)

                                                
                                                
-- stdout --
	* [functional-234952] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:37:10.710253  323392 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:10.710391  323392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:10.710411  323392 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:10.710417  323392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:10.710801  323392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:37:10.711199  323392 out.go:368] Setting JSON to false
	I1227 09:37:10.712149  323392 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4784,"bootTime":1766823447,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:37:10.712233  323392 start.go:143] virtualization:  
	I1227 09:37:10.718094  323392 out.go:179] * [functional-234952] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1227 09:37:10.721298  323392 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:37:10.721361  323392 notify.go:221] Checking for updates...
	I1227 09:37:10.727299  323392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:37:10.730973  323392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 09:37:10.733874  323392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 09:37:10.736772  323392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:37:10.739873  323392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:37:10.743341  323392 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:37:10.744088  323392 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:37:10.774914  323392 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:37:10.775039  323392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:37:10.836311  323392 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:37:10.826551289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:37:10.836445  323392 docker.go:319] overlay module found
	I1227 09:37:10.839554  323392 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 09:37:10.842427  323392 start.go:309] selected driver: docker
	I1227 09:37:10.842452  323392 start.go:928] validating driver "docker" against &{Name:functional-234952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-234952 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:37:10.842593  323392 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:37:10.846249  323392 out.go:203] 
	W1227 09:37:10.849207  323392 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 09:37:10.852099  323392 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-234952 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-234952 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-4rjc5" [e5707c41-613d-445f-ba0b-773b530a15a7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-4rjc5" [e5707c41-613d-445f-ba0b-773b530a15a7] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003539097s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:32053
functional_test.go:1685: http://192.168.49.2:32053: success! body:
Request served by hello-node-connect-5d95464fd4-4rjc5

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32053
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.58s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (20.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [5df87f79-9992-4224-9d43-c5675c1d342a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003188393s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-234952 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-234952 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-234952 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-234952 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8210f3ce-7ae0-430d-8463-4119ef32ccfd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [8210f3ce-7ae0-430d-8463-4119ef32ccfd] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002962138s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-234952 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-234952 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-234952 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [74fd57c0-d7e1-4ee6-a3fe-9e07670a8980] Pending
helpers_test.go:353: "sp-pod" [74fd57c0-d7e1-4ee6-a3fe-9e07670a8980] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003927463s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-234952 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.95s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh -n functional-234952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 cp functional-234952:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3270873362/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh -n functional-234952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh -n functional-234952 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.42s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/299811/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo cat /etc/test/nested/copy/299811/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/299811.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo cat /etc/ssl/certs/299811.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/299811.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo cat /usr/share/ca-certificates/299811.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo cat /etc/ssl/certs/51391683.0"
E1227 09:37:25.580138  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/2998112.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo cat /etc/ssl/certs/2998112.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/2998112.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo cat /usr/share/ca-certificates/2998112.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.33s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-234952 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 ssh "sudo systemctl is-active docker": exit status 1 (370.149679ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 ssh "sudo systemctl is-active containerd": exit status 1 (361.457187ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-234952 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-234952 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-234952 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 321448: os: process already finished
helpers_test.go:526: unable to kill pid 321249: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-234952 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-234952 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-234952 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [9d67caf3-108d-4c58-ba72-22080baf3cef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [9d67caf3-108d-4c58-ba72-22080baf3cef] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003568883s
I1227 09:36:52.186651  299811 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-234952 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.108.73 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-234952 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-234952 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-234952 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-xr9fk" [f2961737-d702-4ab7-bcd5-e6a02e29e6f1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-xr9fk" [f2961737-d702-4ab7-bcd5-e6a02e29e6f1] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.006200757s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "361.625408ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "59.040605ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "370.73088ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "54.274111ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdany-port2557210353/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766828225525468102" to /tmp/TestFunctionalparallelMountCmdany-port2557210353/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766828225525468102" to /tmp/TestFunctionalparallelMountCmdany-port2557210353/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766828225525468102" to /tmp/TestFunctionalparallelMountCmdany-port2557210353/001/test-1766828225525468102
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.303749ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 09:37:05.880085  299811 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 09:37 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 09:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 09:37 test-1766828225525468102
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh cat /mount-9p/test-1766828225525468102
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-234952 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [7145eaa0-0912-4eac-9ccc-0671edfb7d58] Pending
helpers_test.go:353: "busybox-mount" [7145eaa0-0912-4eac-9ccc-0671edfb7d58] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [7145eaa0-0912-4eac-9ccc-0671edfb7d58] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [7145eaa0-0912-4eac-9ccc-0671edfb7d58] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003642735s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-234952 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdany-port2557210353/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.20s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 service list -o json
functional_test.go:1509: Took "528.002816ms" to run "out/minikube-linux-arm64 -p functional-234952 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30858
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30858
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdspecific-port1503996231/001:/mount-9p --alsologtostderr -v=1 --port 44429]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (594.697297ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 09:37:14.317202  299811 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdspecific-port1503996231/001:/mount-9p --alsologtostderr -v=1 --port 44429] ...
E1227 09:37:15.338710  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:37:15.343930  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:37:15.355016  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:37:15.375309  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:37:15.415962  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "sudo umount -f /mount-9p"
E1227 09:37:15.496736  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:37:15.657091  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 ssh "sudo umount -f /mount-9p": exit status 1 (375.580305ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-234952 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdspecific-port1503996231/001:/mount-9p --alsologtostderr -v=1 --port 44429] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.22s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1187717237/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1187717237/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1187717237/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T" /mount1
E1227 09:37:15.977901  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:37:16.618368  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T" /mount1: exit status 1 (1.008623553s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T" /mount2
E1227 09:37:17.898541  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-234952 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1187717237/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1187717237/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-234952 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1187717237/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.61s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-234952 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-234952
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-234952 image ls --format short --alsologtostderr:
I1227 09:37:27.308734  326282 out.go:360] Setting OutFile to fd 1 ...
I1227 09:37:27.308868  326282 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:27.308879  326282 out.go:374] Setting ErrFile to fd 2...
I1227 09:37:27.308884  326282 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:27.309127  326282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
I1227 09:37:27.309746  326282 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:27.309869  326282 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:27.310396  326282 cli_runner.go:164] Run: docker container inspect functional-234952 --format={{.State.Status}}
I1227 09:37:27.333913  326282 ssh_runner.go:195] Run: systemctl --version
I1227 09:37:27.333977  326282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-234952
I1227 09:37:27.364260  326282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/functional-234952/id_rsa Username:docker}
I1227 09:37:27.471976  326282 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-234952 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ c3fcf259c473a │ 85MB   │
│ registry.k8s.io/pause                             │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                             │ latest                                │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ ddc8422d4d35a │ 49.8MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-234952                     │ ce2d2cda2d858 │ 4.79MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test               │ functional-234952                     │ 1076557dac249 │ 3.33kB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 962dbbc0e55ec │ 55.1MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 271e49a0ebc56 │ 60.9MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 88898f1d1a62a │ 72.2MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ de369f46c2ff5 │ 74.1MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-234952 image ls --format table --alsologtostderr:
I1227 09:37:28.097765  326504 out.go:360] Setting OutFile to fd 1 ...
I1227 09:37:28.097883  326504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:28.097895  326504 out.go:374] Setting ErrFile to fd 2...
I1227 09:37:28.097901  326504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:28.098141  326504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
I1227 09:37:28.098749  326504 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:28.098863  326504 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:28.099356  326504 cli_runner.go:164] Run: docker container inspect functional-234952 --format={{.State.Status}}
I1227 09:37:28.120216  326504 ssh_runner.go:195] Run: systemctl --version
I1227 09:37:28.120275  326504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-234952
I1227 09:37:28.138899  326504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/functional-234952/id_rsa Username:docker}
I1227 09:37:28.238778  326504 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-234952 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46fa0fe53d95bee9d7803900edb965d3995ddf9ae12d03","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077764"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoT
ags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"1076557dac249e8f8c372f9111275a295b7740a647ab526d7986fe059da4c018","repoDigests":["localhost/minikube-local-cache-test@sha256:64597c0e0e3bb38db9ec710a60c503ce212b8b94cf7c413309624c72b59b0c6e"],"repoTags":["localhost/minikube-local-cache-test:functional-234952"],"size":"3330"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3"]
,"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"49822549"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"74106775"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/ec
ho-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4789170"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890","registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"60850387"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/paus
e:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e
1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"85015535"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503","registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"72170321"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-234952 image ls --format json --alsologtostderr:
I1227 09:37:27.853708  326443 out.go:360] Setting OutFile to fd 1 ...
I1227 09:37:27.853817  326443 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:27.853822  326443 out.go:374] Setting ErrFile to fd 2...
I1227 09:37:27.853826  326443 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:27.854079  326443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
I1227 09:37:27.854681  326443 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:27.854791  326443 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:27.855733  326443 cli_runner.go:164] Run: docker container inspect functional-234952 --format={{.State.Status}}
I1227 09:37:27.881343  326443 ssh_runner.go:195] Run: systemctl --version
I1227 09:37:27.881398  326443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-234952
I1227 09:37:27.908143  326443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/functional-234952/id_rsa Username:docker}
I1227 09:37:28.013770  326443 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-234952 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4789170"
- id: 962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:7cf0c9cc3c6b7ce30b46fa0fe53d95bee9d7803900edb965d3995ddf9ae12d03
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077764"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
- registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60850387"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "85015535"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "74106775"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "49822549"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1076557dac249e8f8c372f9111275a295b7740a647ab526d7986fe059da4c018
repoDigests:
- localhost/minikube-local-cache-test@sha256:64597c0e0e3bb38db9ec710a60c503ce212b8b94cf7c413309624c72b59b0c6e
repoTags:
- localhost/minikube-local-cache-test:functional-234952
size: "3330"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "72170321"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-234952 image ls --format yaml --alsologtostderr:
I1227 09:37:27.584804  326371 out.go:360] Setting OutFile to fd 1 ...
I1227 09:37:27.584988  326371 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:27.585017  326371 out.go:374] Setting ErrFile to fd 2...
I1227 09:37:27.585036  326371 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:27.585359  326371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
I1227 09:37:27.586222  326371 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:27.586401  326371 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:27.586961  326371 cli_runner.go:164] Run: docker container inspect functional-234952 --format={{.State.Status}}
I1227 09:37:27.610852  326371 ssh_runner.go:195] Run: systemctl --version
I1227 09:37:27.610913  326371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-234952
I1227 09:37:27.631957  326371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/functional-234952/id_rsa Username:docker}
I1227 09:37:27.737582  326371 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-234952 ssh pgrep buildkitd: exit status 1 (367.895507ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image build -t localhost/my-image:functional-234952 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-234952 image build -t localhost/my-image:functional-234952 testdata/build --alsologtostderr: (3.389955238s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-234952 image build -t localhost/my-image:functional-234952 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 04a062a5e8e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-234952
--> 8a7a55581a7
Successfully tagged localhost/my-image:functional-234952
8a7a55581a711d9bebc46e2ec5ead8f984ee2d958c0492a3c1c2b2afb0d4604f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-234952 image build -t localhost/my-image:functional-234952 testdata/build --alsologtostderr:
I1227 09:37:27.681651  326398 out.go:360] Setting OutFile to fd 1 ...
I1227 09:37:27.682564  326398 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:27.682610  326398 out.go:374] Setting ErrFile to fd 2...
I1227 09:37:27.682631  326398 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:37:27.683157  326398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
I1227 09:37:27.683879  326398 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:27.684682  326398 config.go:182] Loaded profile config "functional-234952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I1227 09:37:27.685304  326398 cli_runner.go:164] Run: docker container inspect functional-234952 --format={{.State.Status}}
I1227 09:37:27.702954  326398 ssh_runner.go:195] Run: systemctl --version
I1227 09:37:27.703004  326398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-234952
I1227 09:37:27.720581  326398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/functional-234952/id_rsa Username:docker}
I1227 09:37:27.842670  326398 build_images.go:162] Building image from path: /tmp/build.3165450901.tar
I1227 09:37:27.842756  326398 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 09:37:27.852052  326398 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3165450901.tar
I1227 09:37:27.855736  326398 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3165450901.tar: stat -c "%s %y" /var/lib/minikube/build/build.3165450901.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3165450901.tar': No such file or directory
I1227 09:37:27.855758  326398 ssh_runner.go:362] scp /tmp/build.3165450901.tar --> /var/lib/minikube/build/build.3165450901.tar (3072 bytes)
I1227 09:37:27.881342  326398 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3165450901
I1227 09:37:27.890057  326398 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3165450901 -xf /var/lib/minikube/build/build.3165450901.tar
I1227 09:37:27.898461  326398 crio.go:315] Building image: /var/lib/minikube/build/build.3165450901
I1227 09:37:27.898530  326398 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-234952 /var/lib/minikube/build/build.3165450901 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1227 09:37:30.984356  326398 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-234952 /var/lib/minikube/build/build.3165450901 --cgroup-manager=cgroupfs: (3.08580311s)
I1227 09:37:30.984435  326398 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3165450901
I1227 09:37:30.992240  326398 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3165450901.tar
I1227 09:37:31.000448  326398 build_images.go:218] Built localhost/my-image:functional-234952 from /tmp/build.3165450901.tar
I1227 09:37:31.000483  326398 build_images.go:134] succeeded building to: functional-234952
I1227 09:37:31.000489  326398 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)

TestFunctional/parallel/ImageCommands/Setup (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952 --alsologtostderr
E1227 09:37:20.458943  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-234952 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952 --alsologtostderr: (1.278419877s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
2025/12/27 09:37:24 [DEBUG] GET http://127.0.0.1:35209/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-234952 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-234952
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-234952
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-234952
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestMultiControlPlane/serial/StartCluster (161.42s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1227 09:37:35.820696  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:37:56.300941  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:38:37.261205  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:39:59.181460  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m40.4744944s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (161.42s)

TestMultiControlPlane/serial/DeployApp (37.37s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 kubectl -- rollout status deployment/busybox: (34.625518222s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m4cdd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m8vsx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-t7vgw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m4cdd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m8vsx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-t7vgw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m4cdd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m8vsx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-t7vgw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (37.37s)

TestMultiControlPlane/serial/PingHostFromPods (1.49s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m4cdd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m4cdd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m8vsx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-m8vsx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-t7vgw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 kubectl -- exec busybox-769dd8b7dd-t7vgw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.49s)

TestMultiControlPlane/serial/AddWorkerNode (31.71s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 node add --alsologtostderr -v 5: (30.669417194s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5: (1.044493206s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.71s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-513251 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.084597314s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (20.11s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 status --output json --alsologtostderr -v 5: (1.037751599s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp testdata/cp-test.txt ha-513251:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4265014863/001/cp-test_ha-513251.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251:/home/docker/cp-test.txt ha-513251-m02:/home/docker/cp-test_ha-513251_ha-513251-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m02 "sudo cat /home/docker/cp-test_ha-513251_ha-513251-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251:/home/docker/cp-test.txt ha-513251-m03:/home/docker/cp-test_ha-513251_ha-513251-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m03 "sudo cat /home/docker/cp-test_ha-513251_ha-513251-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251:/home/docker/cp-test.txt ha-513251-m04:/home/docker/cp-test_ha-513251_ha-513251-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m04 "sudo cat /home/docker/cp-test_ha-513251_ha-513251-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp testdata/cp-test.txt ha-513251-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4265014863/001/cp-test_ha-513251-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m02:/home/docker/cp-test.txt ha-513251:/home/docker/cp-test_ha-513251-m02_ha-513251.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251 "sudo cat /home/docker/cp-test_ha-513251-m02_ha-513251.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m02:/home/docker/cp-test.txt ha-513251-m03:/home/docker/cp-test_ha-513251-m02_ha-513251-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m03 "sudo cat /home/docker/cp-test_ha-513251-m02_ha-513251-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m02:/home/docker/cp-test.txt ha-513251-m04:/home/docker/cp-test_ha-513251-m02_ha-513251-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m04 "sudo cat /home/docker/cp-test_ha-513251-m02_ha-513251-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp testdata/cp-test.txt ha-513251-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4265014863/001/cp-test_ha-513251-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m03:/home/docker/cp-test.txt ha-513251:/home/docker/cp-test_ha-513251-m03_ha-513251.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251 "sudo cat /home/docker/cp-test_ha-513251-m03_ha-513251.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m03:/home/docker/cp-test.txt ha-513251-m02:/home/docker/cp-test_ha-513251-m03_ha-513251-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m02 "sudo cat /home/docker/cp-test_ha-513251-m03_ha-513251-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m03:/home/docker/cp-test.txt ha-513251-m04:/home/docker/cp-test_ha-513251-m03_ha-513251-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m04 "sudo cat /home/docker/cp-test_ha-513251-m03_ha-513251-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp testdata/cp-test.txt ha-513251-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m04 "sudo cat /home/docker/cp-test.txt"
E1227 09:41:42.757189  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:41:42.762424  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:41:42.772670  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:41:42.792986  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:41:42.833530  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:41:42.913846  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:41:43.074218  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4265014863/001/cp-test_ha-513251-m04.txt
E1227 09:41:43.395108  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251:/home/docker/cp-test_ha-513251-m04_ha-513251.txt
E1227 09:41:44.036229  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251 "sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m02:/home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt
E1227 09:41:45.317684  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m02 "sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 cp ha-513251-m04:/home/docker/cp-test.txt ha-513251-m03:/home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 ssh -n ha-513251-m03 "sudo cat /home/docker/cp-test_ha-513251-m04_ha-513251-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.11s)

TestMultiControlPlane/serial/StopSecondaryNode (13.26s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 node stop m02 --alsologtostderr -v 5
E1227 09:41:47.878841  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:41:52.999553  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 node stop m02 --alsologtostderr -v 5: (12.122547284s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5: exit status 7 (1.136651238s)

-- stdout --
	ha-513251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-513251-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-513251-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-513251-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1227 09:41:59.492883  341457 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:41:59.493018  341457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:41:59.493030  341457 out.go:374] Setting ErrFile to fd 2...
	I1227 09:41:59.493036  341457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:41:59.493303  341457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:41:59.493542  341457 out.go:368] Setting JSON to false
	I1227 09:41:59.493581  341457 mustload.go:66] Loading cluster: ha-513251
	I1227 09:41:59.493655  341457 notify.go:221] Checking for updates...
	I1227 09:41:59.494742  341457 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:41:59.494768  341457 status.go:174] checking status of ha-513251 ...
	I1227 09:41:59.495278  341457 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:41:59.513340  341457 status.go:371] ha-513251 host status = "Running" (err=<nil>)
	I1227 09:41:59.513362  341457 host.go:66] Checking if "ha-513251" exists ...
	I1227 09:41:59.513747  341457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251
	I1227 09:41:59.539623  341457 host.go:66] Checking if "ha-513251" exists ...
	I1227 09:41:59.539937  341457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:41:59.540024  341457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251
	I1227 09:41:59.561655  341457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251/id_rsa Username:docker}
	I1227 09:41:59.665805  341457 ssh_runner.go:195] Run: systemctl --version
	I1227 09:41:59.672123  341457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:41:59.686292  341457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:41:59.755133  341457 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-27 09:41:59.742788366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:41:59.756420  341457 kubeconfig.go:125] found "ha-513251" server: "https://192.168.49.254:8443"
	I1227 09:41:59.756457  341457 api_server.go:166] Checking apiserver status ...
	I1227 09:41:59.756517  341457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:41:59.768259  341457 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	I1227 09:41:59.776900  341457 api_server.go:192] apiserver freezer: "7:freezer:/docker/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/crio/crio-fe553b52628f8addbe6b97454ab6975aa032613697f56ef6353286ddeafd3daf"
	I1227 09:41:59.776965  341457 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb5d0cc0ca44506f8d7080f95a3d22859783bac85870a83a386fe347d2175d13/crio/crio-fe553b52628f8addbe6b97454ab6975aa032613697f56ef6353286ddeafd3daf/freezer.state
	I1227 09:41:59.784176  341457 api_server.go:214] freezer state: "THAWED"
	I1227 09:41:59.784209  341457 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:41:59.793183  341457 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:41:59.793211  341457 status.go:463] ha-513251 apiserver status = Running (err=<nil>)
	I1227 09:41:59.793222  341457 status.go:176] ha-513251 status: &{Name:ha-513251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:41:59.793239  341457 status.go:174] checking status of ha-513251-m02 ...
	I1227 09:41:59.793559  341457 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:41:59.810976  341457 status.go:371] ha-513251-m02 host status = "Stopped" (err=<nil>)
	I1227 09:41:59.811001  341457 status.go:384] host is not running, skipping remaining checks
	I1227 09:41:59.811008  341457 status.go:176] ha-513251-m02 status: &{Name:ha-513251-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:41:59.811030  341457 status.go:174] checking status of ha-513251-m03 ...
	I1227 09:41:59.811358  341457 cli_runner.go:164] Run: docker container inspect ha-513251-m03 --format={{.State.Status}}
	I1227 09:41:59.828877  341457 status.go:371] ha-513251-m03 host status = "Running" (err=<nil>)
	I1227 09:41:59.828904  341457 host.go:66] Checking if "ha-513251-m03" exists ...
	I1227 09:41:59.829201  341457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m03
	I1227 09:41:59.848163  341457 host.go:66] Checking if "ha-513251-m03" exists ...
	I1227 09:41:59.848678  341457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:41:59.848737  341457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m03
	I1227 09:41:59.867688  341457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m03/id_rsa Username:docker}
	I1227 09:41:59.966211  341457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:41:59.980621  341457 kubeconfig.go:125] found "ha-513251" server: "https://192.168.49.254:8443"
	I1227 09:41:59.980652  341457 api_server.go:166] Checking apiserver status ...
	I1227 09:41:59.980707  341457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:41:59.992427  341457 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1181/cgroup
	I1227 09:42:00.036186  341457 api_server.go:192] apiserver freezer: "7:freezer:/docker/179a4693d589039e53f927ad1b214dfaa4b8c0a3ac88d7daced307cab06be55c/crio/crio-31889485e92572aa010610f056f5be0e42c42c5f79ce0ae70db1a8ee6b486a51"
	I1227 09:42:00.074821  341457 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/179a4693d589039e53f927ad1b214dfaa4b8c0a3ac88d7daced307cab06be55c/crio/crio-31889485e92572aa010610f056f5be0e42c42c5f79ce0ae70db1a8ee6b486a51/freezer.state
	I1227 09:42:00.135577  341457 api_server.go:214] freezer state: "THAWED"
	I1227 09:42:00.135709  341457 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:42:00.172144  341457 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:42:00.172181  341457 status.go:463] ha-513251-m03 apiserver status = Running (err=<nil>)
	I1227 09:42:00.172191  341457 status.go:176] ha-513251-m03 status: &{Name:ha-513251-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:42:00.188283  341457 status.go:174] checking status of ha-513251-m04 ...
	I1227 09:42:00.188826  341457 cli_runner.go:164] Run: docker container inspect ha-513251-m04 --format={{.State.Status}}
	I1227 09:42:00.257279  341457 status.go:371] ha-513251-m04 host status = "Running" (err=<nil>)
	I1227 09:42:00.257310  341457 host.go:66] Checking if "ha-513251-m04" exists ...
	I1227 09:42:00.257665  341457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-513251-m04
	I1227 09:42:00.321176  341457 host.go:66] Checking if "ha-513251-m04" exists ...
	I1227 09:42:00.321531  341457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:42:00.321575  341457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-513251-m04
	I1227 09:42:00.372198  341457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/ha-513251-m04/id_rsa Username:docker}
	I1227 09:42:00.526233  341457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:42:00.559897  341457 status.go:176] ha-513251-m04 status: &{Name:ha-513251-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.26s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (21.61s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 node start m02 --alsologtostderr -v 5
E1227 09:42:03.239798  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:42:15.338404  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 node start m02 --alsologtostderr -v 5: (20.236274601s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5: (1.196583407s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.61s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1227 09:42:23.720877  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.231463956s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 stop --alsologtostderr -v 5
E1227 09:42:43.022620  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 stop --alsologtostderr -v 5: (37.699900498s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 start --wait true --alsologtostderr -v 5
E1227 09:43:04.681172  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:44:26.601432  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 start --wait true --alsologtostderr -v 5: (1m26.48830989s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 node delete m03 --alsologtostderr -v 5: (11.103357585s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.14s)
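The go-template at ha_test.go:521 above verifies that every remaining node reports a Ready condition of True. A minimal client-go sketch of the same check, assuming the default kubeconfig location; it is an illustration, not the test's implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Sketch of the readiness check the go-template performs, using client-go:
// list all nodes and print the status of each node's Ready condition.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s\n", n.Name, c.Status)
			}
		}
	}
}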

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-513251 stop --alsologtostderr -v 5: (36.039049116s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-513251 status --alsologtostderr -v 5: exit status 7 (111.558526ms)

                                                
                                                
-- stdout --
	ha-513251
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-513251-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-513251-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:45:17.660711  353656 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:45:17.660909  353656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.660938  353656 out.go:374] Setting ErrFile to fd 2...
	I1227 09:45:17.660958  353656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:45:17.661410  353656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 09:45:17.661706  353656 out.go:368] Setting JSON to false
	I1227 09:45:17.661761  353656 mustload.go:66] Loading cluster: ha-513251
	I1227 09:45:17.662978  353656 config.go:182] Loaded profile config "ha-513251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 09:45:17.663036  353656 status.go:174] checking status of ha-513251 ...
	I1227 09:45:17.663041  353656 notify.go:221] Checking for updates...
	I1227 09:45:17.663646  353656 cli_runner.go:164] Run: docker container inspect ha-513251 --format={{.State.Status}}
	I1227 09:45:17.684284  353656 status.go:371] ha-513251 host status = "Stopped" (err=<nil>)
	I1227 09:45:17.684312  353656 status.go:384] host is not running, skipping remaining checks
	I1227 09:45:17.684320  353656 status.go:176] ha-513251 status: &{Name:ha-513251 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:45:17.684344  353656 status.go:174] checking status of ha-513251-m02 ...
	I1227 09:45:17.684663  353656 cli_runner.go:164] Run: docker container inspect ha-513251-m02 --format={{.State.Status}}
	I1227 09:45:17.707732  353656 status.go:371] ha-513251-m02 host status = "Stopped" (err=<nil>)
	I1227 09:45:17.707757  353656 status.go:384] host is not running, skipping remaining checks
	I1227 09:45:17.707764  353656 status.go:176] ha-513251-m02 status: &{Name:ha-513251-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:45:17.707782  353656 status.go:174] checking status of ha-513251-m04 ...
	I1227 09:45:17.708086  353656 cli_runner.go:164] Run: docker container inspect ha-513251-m04 --format={{.State.Status}}
	I1227 09:45:17.726508  353656 status.go:371] ha-513251-m04 host status = "Stopped" (err=<nil>)
	I1227 09:45:17.726533  353656 status.go:384] host is not running, skipping remaining checks
	I1227 09:45:17.726541  353656 status.go:176] ha-513251-m04 status: &{Name:ha-513251-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.15s)
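With every host stopped, "minikube status" still prints the per-node table but exits non-zero (7 in the run above), so a caller has to distinguish that intentional non-zero status from a failed invocation. A small Go sketch of that handling; the profile name is taken from the run above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// statuscheck runs "minikube status" and separates three outcomes:
// success, a non-zero status exit (cluster components are down), and
// a failure to run the command at all.
func main() {
	cmd := exec.Command("minikube", "-p", "ha-513251", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes report Running")
	case errors.As(err, &exitErr):
		// The table above still printed; the fully stopped cluster in
		// the run above came back as exit code 7.
		fmt.Println("status exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}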

                                                
                                    
x
+
TestJSONOutput/start/Command (47.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-287683 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-287683 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (47.780789949s)
--- PASS: TestJSONOutput/start/Command (47.79s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.85s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-287683 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-287683 --output=json --user=testUser: (5.844866152s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-589922 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-589922 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (100.224375ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dd3c82ba-b852-4d4d-a081-9aa1750f2dda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-589922] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8c93ee2-5aa6-4844-9ffe-0a3ffa440bdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22343"}}
	{"specversion":"1.0","id":"b461bbb9-24c4-4b6f-a2ba-eaec2323a439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a0845725-f082-4938-9406-64cfd0c20f75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig"}}
	{"specversion":"1.0","id":"a8539424-c8a9-4411-9de2-7961ac9955f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube"}}
	{"specversion":"1.0","id":"bed87c30-9858-4c61-8e50-bb2a0de7763d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5caf96ac-798a-444b-b886-8760c904b6e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a3082617-18e4-4c54-9a4a-c0ae85fef5c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-589922" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-589922
--- PASS: TestErrorJSONOutput (0.25s)
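The --output=json stream above is a sequence of newline-delimited CloudEvents; the error case carries its exit code, name, and message in the data map. A minimal Go sketch for consuming such a stream, with field names taken from the events above; piping the stream in on stdin is an assumption for illustration.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields shown in the minikube JSON output above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Assumption: the JSON stream is piped in on stdin, e.g.
	//   minikube start -p demo --output=json --driver=fail | eventscan
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // events can be long lines
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}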

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (35.93s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-405683 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-405683 --network=: (33.560121516s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-405683" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-405683
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-405683: (2.344594572s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.93s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (30.25s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-209430 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-209430 --network=bridge: (28.078912724s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-209430" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-209430
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-209430: (2.119491557s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.25s)

                                                
                                    
x
+
TestKicExistingNetwork (31.89s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1227 09:55:47.442857  299811 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 09:55:47.463757  299811 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 09:55:47.464714  299811 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 09:55:47.464754  299811 cli_runner.go:164] Run: docker network inspect existing-network
W1227 09:55:47.480673  299811 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 09:55:47.480706  299811 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1227 09:55:47.480723  299811 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1227 09:55:47.480825  299811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:55:47.498332  299811 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1b4d8553c414 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:24:77:63:8b:1f} reservation:<nil>}
I1227 09:55:47.498730  299811 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001edaf20}
I1227 09:55:47.498765  299811 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 09:55:47.498817  299811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 09:55:47.570553  299811 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-580482 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-580482 --network=existing-network: (29.590357165s)
helpers_test.go:176: Cleaning up "existing-network-580482" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-580482
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-580482: (2.141982837s)
I1227 09:56:19.319645  299811 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.89s)
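The test above pre-creates a bridge network with docker network create (on the free 192.168.58.0/24 subnet) and then reuses it via --network=existing-network. A minimal Go sketch of the same setup, shelling out to the commands shown in the log; the network name and subnet mirror the log, while the profile name is an illustrative assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, returning any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Same shape as the "docker network create" call in the log above.
	if err := run("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"existing-network"); err != nil {
		fmt.Println("network create failed (it may already exist):", err)
	}
	// Point a new profile at the pre-created network.
	if err := run("minikube", "start", "-p", "existing-network-demo",
		"--network=existing-network"); err != nil {
		fmt.Println("minikube start failed:", err)
	}
}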

                                                
                                    
x
+
TestKicCustomSubnet (30.51s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-184140 --subnet=192.168.60.0/24
E1227 09:56:42.760142  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-184140 --subnet=192.168.60.0/24: (28.237276627s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-184140 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-184140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-184140
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-184140: (2.238549113s)
--- PASS: TestKicCustomSubnet (30.51s)

                                                
                                    
x
+
TestKicStaticIP (32.1s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-034986 --static-ip=192.168.200.200
E1227 09:57:15.339243  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-034986 --static-ip=192.168.200.200: (29.776615488s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-034986 ip
helpers_test.go:176: Cleaning up "static-ip-034986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-034986
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-034986: (2.152316838s)
--- PASS: TestKicStaticIP (32.10s)
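The static-IP test confirms the requested address by asking the profile for its IP back. A short Go sketch of that verification, reusing the 192.168.200.200 value from the run above; the profile name is an illustrative assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// staticipcheck compares the address reported by "minikube ip" with the
// static IP the profile was started with.
func main() {
	const want = "192.168.200.200"
	out, err := exec.Command("minikube", "-p", "static-ip-demo", "ip").Output()
	if err != nil {
		fmt.Println("minikube ip failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	if got == want {
		fmt.Println("static IP honoured:", got)
	} else {
		fmt.Printf("unexpected IP: got %s, want %s\n", got, want)
	}
}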

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (59.78s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-450306 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-450306 --driver=docker  --container-runtime=crio: (25.297199861s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-453112 --driver=docker  --container-runtime=crio
E1227 09:58:05.804476  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-453112 --driver=docker  --container-runtime=crio: (28.483301366s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-450306
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-453112
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-453112" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-453112
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-453112: (2.117573885s)
helpers_test.go:176: Cleaning up "first-450306" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-450306
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-450306: (2.398635615s)
--- PASS: TestMinikubeProfile (59.78s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-997606 --memory=3072 --mount-string /tmp/TestMountStartserial2698058401/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-997606 --memory=3072 --mount-string /tmp/TestMountStartserial2698058401/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.039606225s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.04s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-997606 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-999353 --memory=3072 --mount-string /tmp/TestMountStartserial2698058401/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-999353 --memory=3072 --mount-string /tmp/TestMountStartserial2698058401/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.296129525s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.30s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-999353 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-997606 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-997606 --alsologtostderr -v=5: (1.706446731s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-999353 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-999353
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-999353: (1.290286451s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.02s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-999353
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-999353: (7.021460138s)
--- PASS: TestMountStart/serial/RestartStopped (8.02s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-999353 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (74.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-823603 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-823603 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m14.361279724s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.94s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-823603 -- rollout status deployment/busybox: (3.583678281s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-ksvx2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-lzgnf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-ksvx2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-lzgnf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-ksvx2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-lzgnf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.36s)
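The deployment check above resolves kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from inside each busybox replica. A minimal Go sketch of the same loop via kubectl exec; the pod names are placeholders, since the real ones come from kubectl get pods as in the log.

package main

import (
	"fmt"
	"os/exec"
)

// dnscheck execs nslookup inside each pod for the same three names the
// test queries, reporting per-pod failures instead of stopping early.
func main() {
	pods := []string{"busybox-0", "busybox-1"} // assumption: fetched beforehand
	names := []string{
		"kubernetes.io",
		"kubernetes.default",
		"kubernetes.default.svc.cluster.local",
	}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup %s failed: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s: %s resolves\n", pod, name)
		}
	}
}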

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-ksvx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-ksvx2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-lzgnf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-823603 -- exec busybox-769dd8b7dd-lzgnf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (28.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-823603 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-823603 -v=5 --alsologtostderr: (27.871258207s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.59s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-823603 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp testdata/cp-test.txt multinode-823603:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp multinode-823603:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile368338869/001/cp-test_multinode-823603.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp multinode-823603:/home/docker/cp-test.txt multinode-823603-m02:/home/docker/cp-test_multinode-823603_multinode-823603-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m02 "sudo cat /home/docker/cp-test_multinode-823603_multinode-823603-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp multinode-823603:/home/docker/cp-test.txt multinode-823603-m03:/home/docker/cp-test_multinode-823603_multinode-823603-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m03 "sudo cat /home/docker/cp-test_multinode-823603_multinode-823603-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp testdata/cp-test.txt multinode-823603-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp multinode-823603-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile368338869/001/cp-test_multinode-823603-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp multinode-823603-m02:/home/docker/cp-test.txt multinode-823603:/home/docker/cp-test_multinode-823603-m02_multinode-823603.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603 "sudo cat /home/docker/cp-test_multinode-823603-m02_multinode-823603.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp multinode-823603-m02:/home/docker/cp-test.txt multinode-823603-m03:/home/docker/cp-test_multinode-823603-m02_multinode-823603-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m03 "sudo cat /home/docker/cp-test_multinode-823603-m02_multinode-823603-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp testdata/cp-test.txt multinode-823603-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp multinode-823603-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile368338869/001/cp-test_multinode-823603-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp multinode-823603-m03:/home/docker/cp-test.txt multinode-823603:/home/docker/cp-test_multinode-823603-m03_multinode-823603.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603 "sudo cat /home/docker/cp-test_multinode-823603-m03_multinode-823603.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 cp multinode-823603-m03:/home/docker/cp-test.txt multinode-823603-m02:/home/docker/cp-test_multinode-823603-m03_multinode-823603-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 ssh -n multinode-823603-m02 "sudo cat /home/docker/cp-test_multinode-823603-m03_multinode-823603-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.77s)
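CopyFile pushes testdata/cp-test.txt onto each node with minikube cp and reads it back over minikube ssh to confirm the contents survived. A small Go sketch of one such roundtrip; the profile and node names are illustrative assumptions.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// cproundtrip copies a local file to a node, reads it back with
// "sudo cat" over ssh, and compares the two, mirroring the loop above.
func main() {
	const profile, node = "multinode-demo", "multinode-demo-m02"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Println("copied file matches on", node)
	} else {
		fmt.Println("content mismatch on", node)
	}
}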

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-823603 node stop m03: (1.325218264s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-823603 status: exit status 7 (529.009999ms)

                                                
                                                
-- stdout --
	multinode-823603
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-823603-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-823603-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-823603 status --alsologtostderr: exit status 7 (548.663824ms)

                                                
                                                
-- stdout --
	multinode-823603
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-823603-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-823603-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:00:54.432102  401240 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:00:54.432277  401240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:00:54.432288  401240 out.go:374] Setting ErrFile to fd 2...
	I1227 10:00:54.432294  401240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:00:54.432547  401240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:00:54.432747  401240 out.go:368] Setting JSON to false
	I1227 10:00:54.432777  401240 mustload.go:66] Loading cluster: multinode-823603
	I1227 10:00:54.432848  401240 notify.go:221] Checking for updates...
	I1227 10:00:54.433197  401240 config.go:182] Loaded profile config "multinode-823603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:00:54.433215  401240 status.go:174] checking status of multinode-823603 ...
	I1227 10:00:54.433738  401240 cli_runner.go:164] Run: docker container inspect multinode-823603 --format={{.State.Status}}
	I1227 10:00:54.453697  401240 status.go:371] multinode-823603 host status = "Running" (err=<nil>)
	I1227 10:00:54.453728  401240 host.go:66] Checking if "multinode-823603" exists ...
	I1227 10:00:54.454061  401240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-823603
	I1227 10:00:54.483826  401240 host.go:66] Checking if "multinode-823603" exists ...
	I1227 10:00:54.484360  401240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:00:54.484411  401240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-823603
	I1227 10:00:54.503704  401240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33263 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/multinode-823603/id_rsa Username:docker}
	I1227 10:00:54.602199  401240 ssh_runner.go:195] Run: systemctl --version
	I1227 10:00:54.609182  401240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:00:54.624174  401240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:00:54.695224  401240 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 10:00:54.685331274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:00:54.695839  401240 kubeconfig.go:125] found "multinode-823603" server: "https://192.168.67.2:8443"
	I1227 10:00:54.695883  401240 api_server.go:166] Checking apiserver status ...
	I1227 10:00:54.695933  401240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:00:54.708712  401240 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup
	I1227 10:00:54.717276  401240 api_server.go:192] apiserver freezer: "7:freezer:/docker/c52862ce4f55cc55871e570c0504a151b0a757bf6d2f2e218fd0f46fa2e8cc0e/crio/crio-8202aa0e80f155f674de07626f938f6aa1e9c38a1eaccc50cd989e8e777e7348"
	I1227 10:00:54.717348  401240 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c52862ce4f55cc55871e570c0504a151b0a757bf6d2f2e218fd0f46fa2e8cc0e/crio/crio-8202aa0e80f155f674de07626f938f6aa1e9c38a1eaccc50cd989e8e777e7348/freezer.state
	I1227 10:00:54.726536  401240 api_server.go:214] freezer state: "THAWED"
	I1227 10:00:54.726579  401240 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1227 10:00:54.734837  401240 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1227 10:00:54.734866  401240 status.go:463] multinode-823603 apiserver status = Running (err=<nil>)
	I1227 10:00:54.734878  401240 status.go:176] multinode-823603 status: &{Name:multinode-823603 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 10:00:54.734895  401240 status.go:174] checking status of multinode-823603-m02 ...
	I1227 10:00:54.735197  401240 cli_runner.go:164] Run: docker container inspect multinode-823603-m02 --format={{.State.Status}}
	I1227 10:00:54.753028  401240 status.go:371] multinode-823603-m02 host status = "Running" (err=<nil>)
	I1227 10:00:54.753056  401240 host.go:66] Checking if "multinode-823603-m02" exists ...
	I1227 10:00:54.753382  401240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-823603-m02
	I1227 10:00:54.770926  401240 host.go:66] Checking if "multinode-823603-m02" exists ...
	I1227 10:00:54.771239  401240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:00:54.771289  401240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-823603-m02
	I1227 10:00:54.788499  401240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33268 SSHKeyPath:/home/jenkins/minikube-integration/22343-297941/.minikube/machines/multinode-823603-m02/id_rsa Username:docker}
	I1227 10:00:54.885998  401240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:00:54.902714  401240 status.go:176] multinode-823603-m02 status: &{Name:multinode-823603-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 10:00:54.902749  401240 status.go:174] checking status of multinode-823603-m03 ...
	I1227 10:00:54.903064  401240 cli_runner.go:164] Run: docker container inspect multinode-823603-m03 --format={{.State.Status}}
	I1227 10:00:54.923704  401240 status.go:371] multinode-823603-m03 host status = "Stopped" (err=<nil>)
	I1227 10:00:54.923728  401240 status.go:384] host is not running, skipping remaining checks
	I1227 10:00:54.923736  401240 status.go:176] multinode-823603-m03 status: &{Name:multinode-823603-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
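Note: the stderr above walks through the per-component checks behind `minikube status`. A minimal sketch repeating them by hand, assuming the multinode-823603 profile from this run is still up and the same binary path (the freezer-cgroup path is run-specific and omitted; all commands appear in the log):
    # host state, read straight from the Docker container
    docker container inspect multinode-823603 --format={{.State.Status}}
    # kubelet state inside the node
    out/minikube-linux-arm64 ssh -p multinode-823603 "sudo systemctl is-active --quiet service kubelet"
    # apiserver: locate the process; minikube then reads its freezer cgroup and probes /healthz
    out/minikube-linux-arm64 ssh -p multinode-823603 "sudo pgrep -xnf kube-apiserver.*minikube.*"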

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-823603 node start m03 -v=5 --alsologtostderr: (7.739813996s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.54s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (71.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-823603
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-823603
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-823603: (25.13960614s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-823603 --wait=true -v=5 --alsologtostderr
E1227 10:01:42.756812  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-823603 --wait=true -v=5 --alsologtostderr: (45.752880858s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-823603
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.02s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 node delete m03
E1227 10:02:15.339346  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-823603 node delete m03: (4.963260689s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-823603 stop: (23.932408363s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-823603 status: exit status 7 (97.170531ms)

                                                
                                                
-- stdout --
	multinode-823603
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-823603-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-823603 status --alsologtostderr: exit status 7 (111.128259ms)

                                                
                                                
-- stdout --
	multinode-823603
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-823603-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:02:44.245744  409114 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:02:44.245909  409114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:02:44.245918  409114 out.go:374] Setting ErrFile to fd 2...
	I1227 10:02:44.245928  409114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:02:44.246229  409114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:02:44.246451  409114 out.go:368] Setting JSON to false
	I1227 10:02:44.246486  409114 mustload.go:66] Loading cluster: multinode-823603
	I1227 10:02:44.246623  409114 notify.go:221] Checking for updates...
	I1227 10:02:44.246914  409114 config.go:182] Loaded profile config "multinode-823603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:02:44.246933  409114 status.go:174] checking status of multinode-823603 ...
	I1227 10:02:44.247517  409114 cli_runner.go:164] Run: docker container inspect multinode-823603 --format={{.State.Status}}
	I1227 10:02:44.267616  409114 status.go:371] multinode-823603 host status = "Stopped" (err=<nil>)
	I1227 10:02:44.267645  409114 status.go:384] host is not running, skipping remaining checks
	I1227 10:02:44.267652  409114 status.go:176] multinode-823603 status: &{Name:multinode-823603 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 10:02:44.267696  409114 status.go:174] checking status of multinode-823603-m02 ...
	I1227 10:02:44.268089  409114 cli_runner.go:164] Run: docker container inspect multinode-823603-m02 --format={{.State.Status}}
	I1227 10:02:44.301144  409114 status.go:371] multinode-823603-m02 host status = "Stopped" (err=<nil>)
	I1227 10:02:44.301175  409114 status.go:384] host is not running, skipping remaining checks
	I1227 10:02:44.301182  409114 status.go:176] multinode-823603-m02 status: &{Name:multinode-823603-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.14s)
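Note: the run above shows `minikube status` returning exit code 7 when the hosts are stopped, so scripts can branch on the exit status instead of parsing the text. A small sketch, assuming the same profile and binary path as above:
    if out/minikube-linux-arm64 -p multinode-823603 status >/dev/null; then
        echo "all components running"
    else
        echo "not fully running (exit code $?)"   # 7 was returned for the stopped hosts above
    fi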

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (52.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-823603 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-823603 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.040336802s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-823603 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.75s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (30.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-823603
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-823603-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-823603-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.55277ms)

                                                
                                                
-- stdout --
	* [multinode-823603-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-823603-m02' is duplicated with machine name 'multinode-823603-m02' in profile 'multinode-823603'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-823603-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-823603-m03 --driver=docker  --container-runtime=crio: (27.940809928s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-823603
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-823603: exit status 80 (365.884601ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-823603 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-823603-m03 already exists in multinode-823603-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-823603-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-823603-m03: (2.073662186s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.54s)
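Note: the exit 14 / exit 80 failures above come from reusing a machine name that already belongs to an existing profile. A sketch of checking what is taken before picking a profile name (the last profile name is hypothetical; the listing commands appear elsewhere in this report):
    out/minikube-linux-arm64 node list -p multinode-823603   # machine names inside the profile
    out/minikube-linux-arm64 profile list                     # existing profile names
    out/minikube-linux-arm64 start -p some-unused-name --driver=docker --container-runtime=crio   # hypothetical, unique name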

                                                
                                    
x
+
TestScheduledStopUnix (102.07s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-425603 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-425603 --memory=3072 --driver=docker  --container-runtime=crio: (26.079274023s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-425603 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 10:04:38.117312  417546 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:04:38.117479  417546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:38.117509  417546 out.go:374] Setting ErrFile to fd 2...
	I1227 10:04:38.117530  417546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:38.117814  417546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:04:38.118116  417546 out.go:368] Setting JSON to false
	I1227 10:04:38.118258  417546 mustload.go:66] Loading cluster: scheduled-stop-425603
	I1227 10:04:38.118698  417546 config.go:182] Loaded profile config "scheduled-stop-425603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:38.118821  417546 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/scheduled-stop-425603/config.json ...
	I1227 10:04:38.119050  417546 mustload.go:66] Loading cluster: scheduled-stop-425603
	I1227 10:04:38.119214  417546 config.go:182] Loaded profile config "scheduled-stop-425603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-425603 -n scheduled-stop-425603
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 10:04:38.580834  417636 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:04:38.580998  417636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:38.581031  417636 out.go:374] Setting ErrFile to fd 2...
	I1227 10:04:38.581060  417636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:04:38.581431  417636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:04:38.581787  417636 out.go:368] Setting JSON to false
	I1227 10:04:38.582879  417636 daemonize_unix.go:73] killing process 417569 as it is an old scheduled stop
	I1227 10:04:38.583662  417636 mustload.go:66] Loading cluster: scheduled-stop-425603
	I1227 10:04:38.584989  417636 config.go:182] Loaded profile config "scheduled-stop-425603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:04:38.585085  417636 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/scheduled-stop-425603/config.json ...
	I1227 10:04:38.585298  417636 mustload.go:66] Loading cluster: scheduled-stop-425603
	I1227 10:04:38.585437  417636 config.go:182] Loaded profile config "scheduled-stop-425603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 10:04:38.594679  299811 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/scheduled-stop-425603/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-425603 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-425603 -n scheduled-stop-425603
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-425603
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-425603 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 10:05:04.541816  418111 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:05:04.542013  418111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:05:04.542042  418111 out.go:374] Setting ErrFile to fd 2...
	I1227 10:05:04.542062  418111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:05:04.542355  418111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:05:04.542677  418111 out.go:368] Setting JSON to false
	I1227 10:05:04.542826  418111 mustload.go:66] Loading cluster: scheduled-stop-425603
	I1227 10:05:04.543218  418111 config.go:182] Loaded profile config "scheduled-stop-425603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:05:04.543343  418111 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/scheduled-stop-425603/config.json ...
	I1227 10:05:04.543583  418111 mustload.go:66] Loading cluster: scheduled-stop-425603
	I1227 10:05:04.543753  418111 config.go:182] Loaded profile config "scheduled-stop-425603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-425603
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-425603: exit status 7 (71.692903ms)

                                                
                                                
-- stdout --
	scheduled-stop-425603
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-425603 -n scheduled-stop-425603
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-425603 -n scheduled-stop-425603: exit status 7 (67.604143ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-425603" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-425603
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-425603: (4.357842493s)
--- PASS: TestScheduledStopUnix (102.07s)
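Note: the sequence above exercises minikube's scheduled-stop daemon. A compact sketch of the same flow, using only flags that appear in the log:
    out/minikube-linux-arm64 stop -p scheduled-stop-425603 --schedule 5m        # arm a stop 5 minutes out
    out/minikube-linux-arm64 stop -p scheduled-stop-425603 --cancel-scheduled   # cancel any pending scheduled stop
    out/minikube-linux-arm64 stop -p scheduled-stop-425603 --schedule 15s       # re-arm; the host stops shortly afterwards
    out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-425603 # prints "Stopped" and exits 7 once stopped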

                                                
                                    
x
+
TestInsufficientStorage (12.62s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-217644 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-217644 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.04252509s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"196d1135-8239-4c55-92d8-b6004211d043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-217644] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1097195b-f37c-49ba-a3a6-b2ca52690655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22343"}}
	{"specversion":"1.0","id":"e0daa9c1-a119-41a1-bb24-b310014dc6d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d90b1762-fc20-4279-bbae-c3c66b1ffdb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig"}}
	{"specversion":"1.0","id":"c2fa8e97-e058-4bda-bceb-71e1d7b979a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube"}}
	{"specversion":"1.0","id":"922b12aa-1a1d-4f18-9d22-43a4540fcd7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"517b0f96-f45f-4b14-bfce-c5f994d30cb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c9469d18-1e6f-4c84-8997-a638baa23a9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d8b4c011-fc0f-4762-89e1-83e7558a39ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8ed4f11f-1744-4790-a6c2-15a4e8137b27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ab67d45-dc2a-4a45-b7da-b6f7d0fb3244","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e8ae33c9-86df-41cc-9b31-55453bd109b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-217644\" primary control-plane node in \"insufficient-storage-217644\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f807d713-ccd7-4505-b6b1-33c532214e00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e7053a83-e742-47dd-9584-9dff5de0fc4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"04fd4ba9-f41b-4aca-882c-c02b352bb11e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-217644 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-217644 --output=json --layout=cluster: exit status 7 (308.022658ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-217644","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-217644","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:06:04.383337  420016 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-217644" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-217644 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-217644 --output=json --layout=cluster: exit status 7 (301.006344ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-217644","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-217644","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:06:04.686484  420082 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-217644" does not appear in /home/jenkins/minikube-integration/22343-297941/kubeconfig
	E1227 10:06:04.696953  420082 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/insufficient-storage-217644/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-217644" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-217644
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-217644: (1.962888943s)
--- PASS: TestInsufficientStorage (12.62s)
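Note: the `--output=json` / `--layout=cluster` forms above are intended for scripting. A sketch of pulling the overall and per-node states out of that JSON, assuming jq is available (field names taken from the output above):
    out/minikube-linux-arm64 status -p insufficient-storage-217644 --output=json --layout=cluster \
      | jq -r '.StatusName, (.Nodes[] | "\(.Name): \(.StatusName)")'
    # prints "InsufficientStorage" and one "name: state" line per node for the run above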

                                                
                                    
x
+
TestRunningBinaryUpgrade (302.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3003150098 start -p running-upgrade-619036 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3003150098 start -p running-upgrade-619036 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.93821166s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-619036 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1227 10:10:18.385988  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:11:42.756732  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:12:15.338926  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-619036 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.176303233s)
helpers_test.go:176: Cleaning up "running-upgrade-619036" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-619036
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-619036: (1.990949499s)
--- PASS: TestRunningBinaryUpgrade (302.92s)

                                                
                                    
x
+
TestKubernetesUpgrade (107.49s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.571591307s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-237695 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-237695 --alsologtostderr: (1.51020653s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-237695 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-237695 status --format={{.Host}}: exit status 7 (90.448615ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.621511985s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-237695 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (117.86452ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-237695] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-237695
	    minikube start -p kubernetes-upgrade-237695 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2376952 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-237695 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.105154084s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-237695" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-237695
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-237695: (2.330006545s)
--- PASS: TestKubernetesUpgrade (107.49s)
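Note: the test above shows the supported version flow: upgrades go stop then start with the newer `--kubernetes-version`, while an in-place downgrade is refused with exit 106. A sketch of that flow, with commands taken from the log and the error's suggestion:
    out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-237695
    out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --memory=3072 --kubernetes-version=v1.35.0 --driver=docker --container-runtime=crio
    # an in-place downgrade back to v1.28.0 is refused; the suggested path is delete and recreate:
    out/minikube-linux-arm64 delete -p kubernetes-upgrade-237695
    out/minikube-linux-arm64 start -p kubernetes-upgrade-237695 --kubernetes-version=v1.28.0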

                                                
                                    
x
+
TestMissingContainerUpgrade (110.48s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3813107292 start -p missing-upgrade-651060 --memory=3072 --driver=docker  --container-runtime=crio
E1227 10:06:42.757242  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3813107292 start -p missing-upgrade-651060 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.101635545s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-651060
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-651060
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-651060 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1227 10:07:15.338413  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-651060 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.2308366s)
helpers_test.go:176: Cleaning up "missing-upgrade-651060" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-651060
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-651060: (2.443774294s)
--- PASS: TestMissingContainerUpgrade (110.48s)
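Note: the recovery exercised above removes the profile's container out of band and then lets a plain `start` with the newer binary recreate it. A sketch of those steps, all taken from the log:
    docker stop missing-upgrade-651060
    docker rm missing-upgrade-651060
    out/minikube-linux-arm64 start -p missing-upgrade-651060 --memory=3072 --driver=docker --container-runtime=crio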

                                                
                                    
x
+
TestPause/serial/Start (55.19s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-708160 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-708160 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.186280684s)
--- PASS: TestPause/serial/Start (55.19s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (27.66s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-708160 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-708160 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.639116643s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.66s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.80s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (319.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.439113325 start -p stopped-upgrade-798580 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.439113325 start -p stopped-upgrade-798580 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.974792544s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.439113325 -p stopped-upgrade-798580 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.439113325 -p stopped-upgrade-798580 stop: (3.197126725s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-798580 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-798580 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m32.595948921s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (319.77s)
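Note: the key point of the upgrade above is that both binaries operate on the same profile name, so the newer release picks up the stopped cluster in place. A sketch of the flow, with commands from the log (the /tmp path is the pinned old-release binary used by this run):
    /tmp/minikube-v1.35.0.439113325 start -p stopped-upgrade-798580 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.35.0.439113325 -p stopped-upgrade-798580 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-798580 --memory=3072 --driver=docker --container-runtime=crio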

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-798580
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-798580: (1.54889554s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.55s)

                                                
                                    
x
+
TestPreload/Start-NoPreload-PullImage (66.55s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-009152 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-009152 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (59.261708481s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-009152 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-arm64 -p test-preload-009152 image pull ghcr.io/medyagh/image-mirrors/busybox:latest: (1.002556144s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-009152
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-009152: (6.286255616s)
--- PASS: TestPreload/Start-NoPreload-PullImage (66.55s)

                                                
                                    
x
+
TestPreload/Restart-With-Preload-Check-User-Image (56.12s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-009152 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-009152 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.847773722s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-009152 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (56.12s)
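Note: the two preload tests above check that an image pulled into a no-preload cluster survives a restart with the preload tarball enabled. A sketch of the combined flow, using only flags and images from the log:
    out/minikube-linux-arm64 start -p test-preload-009152 --preload=false --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p test-preload-009152 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
    out/minikube-linux-arm64 stop -p test-preload-009152
    out/minikube-linux-arm64 start -p test-preload-009152 --preload=true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p test-preload-009152 image list   # the busybox image should still be listed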

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-315809 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-315809 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (92.742699ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-315809] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
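Note: the exit 14 above is the flag conflict between `--no-kubernetes` and an explicit `--kubernetes-version`. A sketch following the error text's own suggestion: clear any pinned version from the global config, then start without Kubernetes (commands from the log and error message):
    out/minikube-linux-arm64 config unset kubernetes-version
    out/minikube-linux-arm64 start -p NoKubernetes-315809 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio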

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (27.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-315809 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-315809 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.937267676s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-315809 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (10.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-315809 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-315809 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.793103407s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-315809 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-315809 status -o json: exit status 2 (315.262549ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-315809","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-315809
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-315809: (1.993020058s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-315809 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-315809 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.76129044s)
--- PASS: TestNoKubernetes/serial/Start (7.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22343-297941/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-315809 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-315809 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.439844ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-315809
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-315809: (1.307475842s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-315809 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-315809 --driver=docker  --container-runtime=crio: (6.901338789s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-315809 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-315809 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.085938ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-785247 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-785247 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (184.402989ms)

                                                
                                                
-- stdout --
	* [false-785247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:16:26.996414  471555 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:16:26.996598  471555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:16:26.996628  471555 out.go:374] Setting ErrFile to fd 2...
	I1227 10:16:26.996650  471555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:16:26.996918  471555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-297941/.minikube/bin
	I1227 10:16:26.997356  471555 out.go:368] Setting JSON to false
	I1227 10:16:26.998231  471555 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7140,"bootTime":1766823447,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 10:16:26.998336  471555 start.go:143] virtualization:  
	I1227 10:16:27.005030  471555 out.go:179] * [false-785247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:16:27.009070  471555 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:16:27.009189  471555 notify.go:221] Checking for updates...
	I1227 10:16:27.015229  471555 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:16:27.018153  471555 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-297941/kubeconfig
	I1227 10:16:27.021068  471555 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-297941/.minikube
	I1227 10:16:27.024073  471555 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:16:27.027017  471555 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:16:27.030471  471555 config.go:182] Loaded profile config "force-systemd-env-193016": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I1227 10:16:27.030611  471555 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:16:27.054031  471555 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:16:27.054151  471555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:16:27.116441  471555 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:16:27.107441703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:16:27.116544  471555 docker.go:319] overlay module found
	I1227 10:16:27.119741  471555 out.go:179] * Using the docker driver based on user configuration
	I1227 10:16:27.122595  471555 start.go:309] selected driver: docker
	I1227 10:16:27.122616  471555 start.go:928] validating driver "docker" against <nil>
	I1227 10:16:27.122643  471555 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:16:27.126115  471555 out.go:203] 
	W1227 10:16:27.128824  471555 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1227 10:16:27.131631  471555 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-785247 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-785247

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-785247

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-785247

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-785247

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-785247

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-785247

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-785247

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-785247

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-785247

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-785247

>>> host: /etc/nsswitch.conf:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /etc/hosts:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /etc/resolv.conf:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-785247

>>> host: crictl pods:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: crictl containers:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> k8s: describe netcat deployment:
error: context "false-785247" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-785247" does not exist

>>> k8s: netcat logs:
error: context "false-785247" does not exist

>>> k8s: describe coredns deployment:
error: context "false-785247" does not exist

>>> k8s: describe coredns pods:
error: context "false-785247" does not exist

>>> k8s: coredns logs:
error: context "false-785247" does not exist

>>> k8s: describe api server pod(s):
error: context "false-785247" does not exist

>>> k8s: api server logs:
error: context "false-785247" does not exist

>>> host: /etc/cni:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: ip a s:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: ip r s:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: iptables-save:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: iptables table nat:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> k8s: describe kube-proxy daemon set:
error: context "false-785247" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-785247" does not exist

>>> k8s: kube-proxy logs:
error: context "false-785247" does not exist

>>> host: kubelet daemon status:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: kubelet daemon config:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> k8s: kubelet logs:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-785247

>>> host: docker daemon status:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: docker daemon config:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /etc/docker/daemon.json:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: docker system info:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: cri-docker daemon status:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: cri-docker daemon config:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: cri-dockerd version:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: containerd daemon status:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: containerd daemon config:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /etc/containerd/config.toml:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: containerd config dump:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: crio daemon status:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: crio daemon config:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: /etc/crio:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"

>>> host: crio config:
* Profile "false-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-785247"
----------------------- debugLogs end: false-785247 [took: 3.351927582s] --------------------------------
helpers_test.go:176: Cleaning up "false-785247" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-785247
--- PASS: TestNetworkPlugins/group/false (3.69s)
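
The exit status 14 above is the expected outcome: with the crio container runtime minikube has no built-in networking, so it rejects --cni=false with MK_USAGE before creating anything, and the debug log collection only confirms that no profile was created. A minimal sketch of a start invocation that satisfies the requirement (the profile name is hypothetical; any supported --cni value other than false, or simply omitting the flag, avoids the error):

    # crio requires a CNI plugin, so pick one explicitly instead of --cni=false.
    out/minikube-linux-arm64 start -p crio-with-cni --memory=3072 \
      --cni=bridge --driver=docker --container-runtime=crio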

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (58.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (58.102694949s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-482317 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b0c0fdc8-8b9b-4e39-882b-71311e66855c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b0c0fdc8-8b9b-4e39-882b-71311e66855c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002996878s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-482317 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)
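
The busybox manifest itself (testdata/busybox.yaml) is not reproduced in this report. A rough stand-in for what the step does, assuming a single pod labelled integration-test=busybox running the gcr.io/k8s-minikube/busybox image that appears later in the image list (the real manifest may differ):

    # Hypothetical equivalent of applying testdata/busybox.yaml.
    kubectl --context old-k8s-version-482317 run busybox \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
      --labels="integration-test=busybox" --restart=Never --command -- sleep 3600
    # Wait for the pod, then check the open-file limit inside it, as the test does.
    kubectl --context old-k8s-version-482317 wait --for=condition=Ready pod/busybox --timeout=480s
    kubectl --context old-k8s-version-482317 exec busybox -- /bin/sh -c "ulimit -n"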

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-482317 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-482317 --alsologtostderr -v=3: (12.058402047s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482317 -n old-k8s-version-482317
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482317 -n old-k8s-version-482317: exit status 7 (94.568321ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-482317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
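
Exit status 7 from "minikube status" is not a failure here: the cluster was stopped in the previous step, and status reports a stopped host through a non-zero exit code, which is why the test notes "(may be ok)". A sketch of tolerating that in a wrapper script (the exit-code handling mirrors what this run produced rather than a documented contract):

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482317 -n old-k8s-version-482317
    rc=$?
    # 0 = running, 7 = stopped in this run; anything else is treated as a real error.
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
      echo "unexpected status exit code: $rc" >&2
      exit 1
    fi
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-482317 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4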

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (52.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-482317 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.265760213s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-482317 -n old-k8s-version-482317
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jnpvk" [15c15981-5af5-4212-be07-05f623f48f13] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003228076s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jnpvk" [15c15981-5af5-4212-be07-05f623f48f13] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003865743s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-482317 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-482317 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
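
The three images flagged above are expected extras (kindnet for pod networking and the busybox test image), not a failure. A sketch of inspecting the same JSON by hand, assuming jq is available and that each entry carries a repoTags array as recent minikube releases emit (the test's real allow-list lives in the test code):

    # Print image tags and filter out the stock registry.k8s.io components.
    out/minikube-linux-arm64 -p old-k8s-version-482317 image list --format=json \
      | jq -r '.[].repoTags[]' \
      | grep -v '^registry.k8s.io/' || true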

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 10:26:42.758039  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (42.789194778s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.79s)
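
This profile pins the API server to port 8444 instead of the default 8443 via --apiserver-port. A quick sanity check (not part of the test) that the generated kubeconfig picked up the non-default port:

    # The cluster's server URL should end in :8444.
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-784377")].cluster.server}'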

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-784377 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [64a81aa2-3d2b-45b0-8ec3-053991c36e9f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [64a81aa2-3d2b-45b0-8ec3-053991c36e9f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00289178s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-784377 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-784377 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-784377 --alsologtostderr -v=3: (12.271181344s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377: exit status 7 (65.554235ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-784377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 10:27:15.339365  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-784377 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (46.194898819s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-784377 -n default-k8s-diff-port-784377
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-v59x7" [7c311381-20c6-4daf-9410-67c35fe6ce3b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005154556s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
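
The helper waits up to 9 minutes for a Running, healthy dashboard pod selected by label. Roughly the same wait expressed directly with kubectl (a sketch; the helper also logs the intermediate pod phases, which a plain wait does not):

    kubectl --context default-k8s-diff-port-784377 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s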

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-v59x7" [7c311381-20c6-4daf-9410-67c35fe6ce3b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00319452s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-784377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-784377 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (48.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (48.194424319s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.19s)
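
The --embed-certs flag makes minikube write the client certificate and key into the kubeconfig as base64 data instead of file paths. A quick check (not part of the test) that the embedding actually happened for this profile:

    # Non-empty output means the client certificate is embedded rather than referenced by path.
    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-367691")].user.client-certificate-data}' | head -c 32; echo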

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (56.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (56.892951099s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-367691 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dc858700-a966-41a6-8e94-31faef3ddea6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dc858700-a966-41a6-8e94-31faef3ddea6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004649354s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-367691 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-367691 --alsologtostderr -v=3
E1227 10:29:27.757123  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:27.762405  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:27.772640  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:27.792873  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:27.833125  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:27.913400  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:28.073650  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:28.394165  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:29.035265  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:30.316103  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:32.877020  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-367691 --alsologtostderr -v=3: (12.231675324s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-367691 -n embed-certs-367691
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-367691 -n embed-certs-367691: exit status 7 (98.213049ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-367691 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 10:29:37.998126  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:48.239087  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-367691 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (48.998315851s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-367691 -n embed-certs-367691
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-241090 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [67c83974-917e-46ed-b633-b33ab87382c0] Pending
helpers_test.go:353: "busybox" [67c83974-917e-46ed-b633-b33ab87382c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [67c83974-917e-46ed-b633-b33ab87382c0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003279753s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-241090 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-241090 --alsologtostderr -v=3
E1227 10:30:08.719346  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-241090 --alsologtostderr -v=3: (12.027429715s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241090 -n no-preload-241090
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241090 -n no-preload-241090: exit status 7 (69.724392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-241090 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-241090 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (49.703060936s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-241090 -n no-preload-241090
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-27bs2" [5ae8c999-7925-404e-b6f6-f922441a81a4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003601585s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-27bs2" [5ae8c999-7925-404e-b6f6-f922441a81a4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006965692s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-367691 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-367691 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (33.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 10:30:49.679611  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (33.058144805s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-8fsf7" [78d72f89-478e-4c2d-8eed-a53863277e5c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003843376s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-8fsf7" [78d72f89-478e-4c2d-8eed-a53863277e5c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004306817s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-241090 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-241090 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-443576 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-443576 --alsologtostderr -v=3: (1.479795657s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-443576 -n newest-cni-443576
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-443576 -n newest-cni-443576: exit status 7 (103.474734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-443576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E1227 10:31:25.805091  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-443576 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (16.605732079s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-443576 -n newest-cni-443576
E1227 10:31:42.756890  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/functional-234952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.13s)

                                                
                                    
TestPreload/PreloadSrc/gcs (4.96s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-589969 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-589969 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (4.688840863s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-589969" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-589969
--- PASS: TestPreload/PreloadSrc/gcs (4.96s)

                                                
                                    
TestPreload/PreloadSrc/github (5.68s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-997750 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-997750 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (5.289905807s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-997750" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-997750
--- PASS: TestPreload/PreloadSrc/github (5.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-443576 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (1.05s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-755434 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-755434" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-755434
--- PASS: TestPreload/PreloadSrc/gcs-cached (1.05s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (54.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (54.263609146s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1227 10:31:58.650121  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:08.891217  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:11.599785  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:15.339246  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/addons-716851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:29.371830  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (51.840576433s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.84s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-785247 "pgrep -a kubelet"
I1227 10:32:39.459590  299811 config.go:182] Loaded profile config "auto-785247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-785247 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-hwsx9" [88714de2-ebb1-429d-a491-b3a5749d61b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-hwsx9" [88714de2-ebb1-429d-a491-b3a5749d61b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003562199s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-xdzzf" [98ba31ca-5fc2-46b7-89bc-4652a98920d9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004019314s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-785247 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-785247 "pgrep -a kubelet"
I1227 10:32:52.185485  299811 config.go:182] Loaded profile config "kindnet-785247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-785247 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-kv7rh" [16d2ead4-0fbd-4275-88ed-f2ae9402ae78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-kv7rh" [16d2ead4-0fbd-4275-88ed-f2ae9402ae78] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.0035575s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-785247 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.964185139s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.96s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1227 10:34:27.756564  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/old-k8s-version-482317/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.176526987s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-pmtb4" [998ffed3-cac9-414a-84d8-f64201415848] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004148785s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-785247 "pgrep -a kubelet"
I1227 10:34:32.176904  299811 config.go:182] Loaded profile config "custom-flannel-785247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-785247 replace --force -f testdata/netcat-deployment.yaml
E1227 10:34:32.252457  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/default-k8s-diff-port-784377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8ss7v" [f2d3b451-67b0-4ccd-8050-5377267178b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-8ss7v" [f2d3b451-67b0-4ccd-8050-5377267178b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003949062s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-785247 "pgrep -a kubelet"
I1227 10:34:35.812739  299811 config.go:182] Loaded profile config "calico-785247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-785247 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-x8d79" [de7611f9-9d20-4384-ab60-e93b04f32ddd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-x8d79" [de7611f9-9d20-4384-ab60-e93b04f32ddd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00394734s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-785247 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-785247 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (71.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m11.116244992s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1227 10:35:15.962359  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:35:36.442959  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.557247794s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.56s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-jpngf" [9903cb97-06b7-4deb-a78c-c8f953e82094] Running
E1227 10:36:17.403209  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/no-preload-241090/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004252088s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-785247 "pgrep -a kubelet"
I1227 10:36:17.826325  299811 config.go:182] Loaded profile config "flannel-785247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-785247 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-fx59d" [7adc44ee-e762-4451-a4f0-245ebfeb0f4c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-fx59d" [7adc44ee-e762-4451-a4f0-245ebfeb0f4c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003256693s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-785247 "pgrep -a kubelet"
I1227 10:36:21.143871  299811 config.go:182] Loaded profile config "enable-default-cni-785247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-785247 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-j8rtp" [b48f4e8f-5d64-4e36-8a5a-60795cff010d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-j8rtp" [b48f4e8f-5d64-4e36-8a5a-60795cff010d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003459746s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-785247 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-785247 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (65.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-785247 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m5.705498735s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.71s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-785247 "pgrep -a kubelet"
I1227 10:38:01.397674  299811 config.go:182] Loaded profile config "bridge-785247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-785247 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5l74d" [43607a5f-1734-4da3-89fb-378d29aeae86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-5l74d" [43607a5f-1734-4da3-89fb-378d29aeae86] Running
E1227 10:38:06.367387  299811 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-297941/.minikube/profiles/kindnet-785247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003528756s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-785247 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-785247 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.74s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-725790 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-725790" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-725790
--- SKIP: TestDownloadOnlyKic (0.74s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
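
For context: the skaffold check drives its image builds through the Docker daemon that "minikube docker-env" exposes, which a crio profile does not provide, hence the skip above. A minimal sketch of the docker-runtime flow the test would otherwise exercise (the profile name below is illustrative, not taken from this run):

	# point the local docker client at the Docker daemon inside a docker-runtime profile
	eval $(out/minikube-linux-arm64 -p skaffold-example docker-env)
	# skaffold can then build and deploy straight into that daemon
	skaffold run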

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-913868" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-913868
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
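
For context: the guarded behaviour is minikube's --disable-driver-mounts start flag, which only matters for hypervisor drivers such as VirtualBox that auto-mount host folders into the VM. A minimal sketch of the invocation the test would exercise on a VirtualBox host, reusing the profile name created and deleted above:

	out/minikube-linux-arm64 start -p disable-driver-mounts-913868 --driver=virtualbox --disable-driver-mounts
	out/minikube-linux-arm64 delete -p disable-driver-mounts-913868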

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.49s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-785247 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-785247

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-785247

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-785247

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-785247

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-785247

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-785247

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-785247

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-785247

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-785247

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-785247

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /etc/hosts:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /etc/resolv.conf:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-785247

>>> host: crictl pods:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: crictl containers:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> k8s: describe netcat deployment:
error: context "kubenet-785247" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-785247" does not exist

>>> k8s: netcat logs:
error: context "kubenet-785247" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-785247" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-785247" does not exist

>>> k8s: coredns logs:
error: context "kubenet-785247" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-785247" does not exist

>>> k8s: api server logs:
error: context "kubenet-785247" does not exist

>>> host: /etc/cni:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: ip a s:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: ip r s:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: iptables-save:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: iptables table nat:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-785247" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-785247" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-785247" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: kubelet daemon config:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> k8s: kubelet logs:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-785247

>>> host: docker daemon status:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: docker daemon config:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: docker system info:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: cri-docker daemon status:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: cri-docker daemon config:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: cri-dockerd version:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: containerd daemon status:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: containerd daemon config:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: containerd config dump:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: crio daemon status:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: crio daemon config:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: /etc/crio:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

>>> host: crio config:
* Profile "kubenet-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-785247"

----------------------- debugLogs end: kubenet-785247 [took: 3.344703107s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-785247" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-785247
--- SKIP: TestNetworkPlugins/group/kubenet (3.49s)
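
For context: kubenet is kubelet's legacy non-CNI network plugin and is only exercised with the docker runtime; crio relies on a CNI plugin for pod networking, which is why this group is skipped. A hedged sketch of how a crio profile is started with an explicit CNI instead (flag values illustrative):

	out/minikube-linux-arm64 start -p kubenet-785247 --container-runtime=crio --cni=bridge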

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.83s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-785247 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-785247

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-785247

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-785247

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-785247

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-785247

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-785247

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-785247

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-785247

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-785247

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-785247

>>> host: /etc/nsswitch.conf:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /etc/hosts:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /etc/resolv.conf:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-785247

>>> host: crictl pods:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: crictl containers:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> k8s: describe netcat deployment:
error: context "cilium-785247" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-785247" does not exist

>>> k8s: netcat logs:
error: context "cilium-785247" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-785247" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-785247" does not exist

>>> k8s: coredns logs:
error: context "cilium-785247" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-785247" does not exist

>>> k8s: api server logs:
error: context "cilium-785247" does not exist

>>> host: /etc/cni:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: ip a s:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: ip r s:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: iptables-save:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: iptables table nat:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-785247

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-785247

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-785247" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-785247" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-785247

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-785247

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-785247" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-785247" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-785247" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-785247" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-785247" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: kubelet daemon config:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> k8s: kubelet logs:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-785247

>>> host: docker daemon status:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: docker daemon config:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: docker system info:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: cri-docker daemon status:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: cri-docker daemon config:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: cri-dockerd version:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: containerd daemon status:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: containerd daemon config:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: containerd config dump:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: crio daemon status:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: crio daemon config:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: /etc/crio:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

>>> host: crio config:
* Profile "cilium-785247" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-785247"

----------------------- debugLogs end: cilium-785247 [took: 3.672325323s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-785247" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-785247
--- SKIP: TestNetworkPlugins/group/cilium (3.83s)